https://oercommons.org/courseware/lesson/66281/overview
The Interplay Between the Legislature and Executive

Learning Objective
By the end of this section, you will be able to:
- Compare and contrast the powers of the legislative and executive branches of government

Introduction
This section describes the various roles played by the legislative and executive branches in governing Texas.

Roles Played by the Legislative and Executive Branches
The executive and legislative branches of government play an interesting tug-of-war with public policy in Texas in a slightly different way than in the federal government. The Texas Legislature has much more initial control over the budget process than the governor. The Legislative Budget Board (LBB), in which the governor plays no part, is an entirely legislative agency and prepares the state's draft budget under the direction of legislative leaders. This legislature-driven budget, however, starts from a number generated by a different member of the executive branch – the Comptroller of Public Accounts. The Comptroller's biennial revenue estimate (BRE) is the initial estimate of what the state's total revenue will be over the two-year budget cycle and is a preview of the number the Comptroller will use at the end of the legislative session to "certify" the budget. Without certification by the Comptroller, the state budget cannot take effect, and legislators would be required to start over.

At the end of the session, however, the governor's office experiences a power surge seen in no other state. A governor can veto most bills after the legislature has finally adjourned, removing the threat of an override. The governor also has "line-item veto" authority, allowing him to veto individual spending items from the state budget without vetoing the entire bill. As with other vetoes, his line-item vetoes can be made after the legislative session has ended.
While the legislature has the sole power to make law in Texas, executive branch agencies have significant latitude to interpret state statutes through agency rulemaking. Legislators, aware of and somewhat wary of this, require a special statement attached to the official analysis of every bill considered on the floor of the House or Senate disclosing whether the bill delegates any rulemaking authority to any state official or agency.

The Texas Attorney General also brings some interpretive power to the equation. With the power to issue a formal Attorney General's Opinion, this official can sometimes make public policy decisions separately from the legislature, and without the judicial branch. An Attorney General's Opinion in Texas has the force of law until a court rules otherwise, or the legislature changes the law on which the opinion is based.

Licenses and Attributions
CC LICENSED CONTENT, ORIGINAL
The Legislative and Executive Branches. Authored by: Andrew Teas. License: CC BY: Attribution
oercommons
2025-03-18T00:35:05.983970
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/66281/overview", "title": "Texas Government 2.0, The Executive Department and the Office of the Governor of Texas", "author": null }
https://oercommons.org/courseware/lesson/66282/overview
The Informal Powers of the Executive Branch

Learning Objective
By the end of this section, you will be able to:
- Discuss the informal powers of the executive branch

Introduction
A governor's powers are not limited to their constitutional and statutory authority. This section discusses a governor's informal powers.

The Informal Powers of the Executive Branch
In addition to the formal powers of the governor and other executive branch officials, a smart governor can accomplish a lot using informal powers. Governor George W. Bush was legendary for his ability to forge genuine friendships with other state officials – notably House Speaker Pete Laney and Lieutenant Governor Bob Bullock. The three had breakfast at the Governor's Mansion weekly during legislative sessions. When he announced his candidacy for the Republican nomination for President in 1999, Speaker Laney, a Democrat, introduced him. Friendly late-night meetings over a beer or two helped Governor Bush and some of his staunchest political opponents find common ground on a variety of policy issues.

The Texas Governor has the highest-profile role of any state official and can use that to his advantage. An endorsement from a governor can mean a lot in a race for the state house or senate, and a grateful legislator should be eager to return the favor. Conversely, Governor Greg Abbott actively worked against the reelection of two legislators from his own party in 2018 – helping to defeat one.

The governor's power to appoint members to boards, commissions, councils, and committees can provide the governor with significant informal power over policy in many key areas. The executive branch of the Texas government is made up of over 400 state boards, commissions, and agencies. Finally, the governor's unilateral post-session veto power creates a lot of informal leverage during the legislative session.
A legislative bill author asked by the governor to support a change to his bill – even a drastic one – has little alternative, knowing the bill can be vetoed with no opportunity for an override vote.

Licensing and Attribution
CC LICENSED CONTENT, ORIGINAL
The Informal Powers of the Executive Branch. Authored by: Andrew Teas. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66283/overview
The Plural Executive

Learning Objective
By the end of this section, you will be able to:
- Explain Texas' plural executive and discuss the various offices and powers

Introduction
Texas fragmented the Governor's power at the end of Reconstruction and dispersed executive power by creating a plural executive. This section discusses Texas' plural executive.

Texas' Plural Executive
Article 4 of the Texas Constitution describes the executive department (branch) of Texas. Texas utilizes a plural executive, which means the power of the Governor is limited and distributed amongst other government officials. In other words, no one government official in Texas is solely responsible for the Texas Executive Branch. The state bureaucracy in Texas has numerous state boards, commissions, councils, and committees. Additionally, several major agencies within the plural executive have administrative and advisory functions. Below are some of the members of the Texas plural executive and their roles.

The lieutenant governor is technically a member of the executive branch, but with duties that are mostly legislative. While not a member of the Senate, he serves as the state senate's presiding officer – not in a ceremonial role such as that served by the United States Vice President over the U.S. Senate, but as the state senate's day-to-day leader. He is also first in line of succession for Governor, a member of the Legislative Redistricting Board, and Chair of the Legislative Budget Board. He is elected statewide and serves a four-year term. The current lieutenant governor is Dan Patrick, a former state senator and former television sports anchor from Houston.

The Texas Attorney General serves as the official lawyer for the State of Texas, representing the state on civil matters, and is responsible for interpreting the application of statutory law in the absence of an applicable court ruling.
His office has additional duties relating to child support enforcement and consumer protection. Elected to a four-year term statewide, the current attorney general is Ken Paxton, a former state senator from the Dallas area.

The Commissioner of the General Land Office is the state's real estate asset manager – an unusual position for voters to choose in a statewide election until you remember that Texas, as a condition of admission to the United States in 1845, maintained state ownership of vast amounts of public land that would have become federal in most other states. The leasing of public land for everything from oil exploration to grazing has been an important source of funding for state universities and public schools. The land commissioner is also responsible for Texas' 367 miles of Gulf Coast beach and has played an increasingly central role in managing disaster relief funds since Hurricane Harvey in 2017. Elected to a four-year term, the current commissioner is George P. Bush, nephew of former President George H. W. Bush.

The Comptroller of Public Accounts is the state's independently elected chief financial officer. Even if passed by the legislature and signed by the governor, the state's biennial budget cannot take effect unless "certified" by the Comptroller – his official finding that the budgeted amount will not exceed the amount of revenue he believes the state will collect during the budget period. The Comptroller is also the state's tax collector and banker. Glenn Hegar, a former state representative and senator from Katy, is the current Comptroller.

The Texas Agriculture Commissioner is elected to both promote and regulate Texas agriculture, which some perceive as a potential conflict. He administers the Texas Agriculture Department, the duties of which include weights and measures – including gasoline. Inspectors periodically check every gas pump in Texas to make sure consumers are receiving the amount they purchase.
The current Agriculture Commissioner is Sid Miller, a former state representative from Stephenville.

The Texas Railroad Commission consists of three commissioners, all elected statewide, who serve staggered six-year terms. Originally created to regulate intrastate rail commerce, that task was largely assumed by the federal government, leaving the Commission to take on other tasks. During the Great Depression, the Commission was given the responsibility of regulating the Texas oil industry, which was a substantial percentage of the world's oil industry in the early twentieth century. By setting an "allowable" for every oil well in Texas – the maximum amount that could be legally extracted – the Texas Railroad Commission basically set the global price of oil for many years. The Organization of Petroleum Exporting Countries (OPEC) used the Texas Railroad Commission as its model for creating a worldwide oil cartel in 1960. The Commission still has some authority over gas utilities, pipeline safety, liquefied natural gas production, and surface coal and uranium mining.

The Texas State Board of Education is the largest elected body in the state's executive branch, with 15 members elected from single-member districts. Chaired by Donna Bahorich of Houston, the Board is charged with setting curriculum standards, reviewing textbooks, establishing graduation requirements, overseeing the Texas Permanent School Fund, and approving new charter schools. The Board works with the Texas Education Agency, which is administered by a Commissioner of Education appointed by the governor, not the Board. The current Commissioner of Education is Mike Morath, a software developer who served on the Dallas Independent School District Board before his appointment by Governor Greg Abbott in 2016.

The Texas Secretary of State is not elected but is appointed by the governor and confirmed by the state senate.
The Secretary of State has a variety of duties, including administration of elections within Texas, publishing the Texas Register (which notifies the public of proposed and final state agency rules), and advising the governor on border matters. The Secretary of State also presides over the Texas House of Representatives at the beginning of each legislative session, presiding over the election of his replacement to serve as Speaker of the House. Governor Abbott appointed John Scott – a Fort Worth attorney who briefly represented former President Donald Trump in a lawsuit challenging the 2020 election results in Pennsylvania – as Texas' new secretary of state on October 21, 2021. Abbott announced Scott's appointment two days after the end of the third special legislative session. That means the Senate will not have to confirm him until the next time it meets, which is currently scheduled for January 2023.

Other executive branch officials include hundreds of appointees to state boards and commissions, from the powerful to the obscure. The Texas Transportation Commission oversees billions in highway funding, while the Board of Criminal Justice oversees one of the nation's largest prison systems. Texas also has a state poet laureate, a state musician, and two state artists – one for two-dimensional and one for three-dimensional media.

Licenses and Attributions
CC LICENSED CONTENT, ORIGINAL
The Texas Plural Executive. Authored by: Daniel M. Regalado. License: CC BY: Attribution
CC LICENSED CONTENT, ADAPTATION
The Texas Plural Executive: Revision and Adaptation. Authored by: Andrew Teas. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66284/overview
Glossary

Glossary: Texas' Governor and Executive Branch

appointment: the power of the chief executive, whether the president of the United States or the governor of the state, to appoint persons to office.

attorney general: an elected state official who serves as the state's chief civil lawyer.

bureaucracy: the complex structure of offices, tasks, rules, and principles of organization that is employed by all large-scale institutions to coordinate the work of their personnel.

comptroller: an elected state official who directs the collection of taxes and other revenues, and estimates revenues for the budgeting process.

executive budget: the state budget prepared and submitted by the governor to the legislature, which indicates the governor's spending priorities.

land commissioner: an elected state official who acts as the manager of most publicly owned lands.

lieutenant governor: the second-highest elected official in the state and president of the state senate.

line-item veto power: enables the governor to veto individual components (or lines) of an appropriations bill.

plural executive: a group of officers or major officials that functions in making current decisions or in giving routine orders typically the responsibility of an individual executive officer or official. In Texas, the power of the Governor is limited and distributed amongst other government officials.

secretary of state: the state official, appointed by the governor, whose primary responsibility is administering elections.

veto: the governor's power to turn down legislation; can be overridden by a two-thirds vote of both the House and Senate.

Licenses and Attributions
CC LICENSED CONTENT, ORIGINAL
The Executive Department and the Office of the Governor of Texas: Glossary. Authored by: Andrew Teas. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66285/overview
Assessment

Texas Government Chapter Four Quiz
Check your knowledge of Chapter Four by taking the quiz linked below. The quiz will open in a new browser window or tab.
https://oercommons.org/courseware/lesson/66293/overview
Local Government in Texas

Chapter Learning Objective
By the end of this chapter, you will be able to:
- Describe the roles and responsibilities of local political systems in Texas

Introduction
Voters in Texas have leaned conservative throughout the state's history, albeit with a bit of a progressive streak. Even as the nature of the Republican and Democratic parties has changed, the basic ideology of Texas voters has been fairly reliable. One major change, though, has been the geographic distribution of liberals and conservatives. In recent elections, urban areas have grown increasingly liberal as rural areas have grown more conservative. This has created an interesting conflict with respect to the nature of local governments in Texas.

Conservative lawmakers have historically supported the concept of local control, letting local governments – especially cities – conduct their business as they please, relying on local voters to keep regulatory overreach in check. More liberal urban voters, though, have shown a higher level of comfort with government regulation and authority that conservative voters generally oppose. Austin, arguably the most liberal of Texas cities, has enacted ordinances banning grocery stores from offering plastic bags to customers, placing red-light cameras at intersections to automatically ticket drivers who fail to come to a complete stop before making a right turn, and even requiring apartment properties to participate in voluntary federal low-income housing programs. The San Antonio City Council, meanwhile, refused to allow Chick-fil-A restaurants in the city's airport because of the company's financial support of organizations like the Salvation Army and the Fellowship of Christian Athletes, which city leaders felt were insufficiently supportive of gay and lesbian issues. In 2019, the Texas Legislature passed a number of bills to rein in local government policies it deemed out of control.
Red-light cameras were banned statewide. A "Save Chick-fil-A bill" prohibiting cities from refusing to do business with companies that partner with religious groups was enacted. A bill prohibiting new partnerships between cities and abortion providers was also signed into law. As the demographics and politics of Texas change, how will future legislators expand or contract the powers of local governments?

Local Government in Texas
Understanding government in Texas is impossible without a study of local governments. Texas has one state government, which operates under the authority of one federal government. Under that umbrella, however, are 254 counties, 1,214 cities, 1,079 independent school districts, and 2,600 special purpose districts that cover everything from rural fire prevention to mosquito control. Let's take a look at the types of local government in Texas.

License and Attribution
CC LICENSED CONTENT, ORIGINAL
Local Government in Texas: Introduction. Authored by: Andrew Teas. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66294/overview
The Relationship Between Local, State, and National Government

Learning Objectives
By the end of this section, you will be able to:
- Explain the relationship between the local government, state government, and national government

Introduction
This section explores the interrelationship between local, state, and national government.

The Relationship Between Local, State, and National Government
Whereas the federal government and state governments share power in countless ways, a local government must be granted power by the state. The way power is granted and limited is different for different types of local government.

Counties are general-law forms of government, created specifically by the state. Geographically, counties are like puzzle pieces – every square inch of Texas is in one of the state's 254 counties. Counties are given specific powers by the state under the Constitution and state statutes and have virtually no flexibility.

Cities, on the other hand, are created by their citizens, who apply for a charter to create one. Most of Texas does not lie within the city limits of any city. While small cities operate much like counties, with specific powers granted and limited by the state, larger "home rule" cities have tremendous flexibility. Cities like Austin have passed ordinances expanding the concept of a municipal government into social justice and environmental regulation areas that have prompted the state legislature to begin limiting the powers of home rule cities.

"Preemption" laws – state laws limiting the powers of local governments – are controversial. Conservatives comprise the majority of both chambers of the state legislature and historically favor the concept of local control.
As voters in many urban areas trend more progressive, favoring social justice and environmental regulations beyond those favored by state lawmakers, the concept of local control begins to clash with the legislature's basic ideological standards.

Cities sometimes derive power and funding directly from the federal government. Most large Texas cities have been granted "substantial equivalency" by the U.S. Department of Housing and Urban Development, meaning the city's Fair Housing ordinance is basically the same as the national law. Those cities are empowered, to an extent, to enforce the Federal Fair Housing Act on the federal government's behalf.

Licenses and Attributions
CC LICENSED CONTENT, ORIGINAL
Revision and Adaptation. Authored by: Kris S. Seago. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66295/overview
Municipal (City) Government

Learning Objective
By the end of this section, you will be able to:
- Explain the structure and function of municipal government in Texas

Introduction
This section discusses the structure and function of municipal government in Texas.

General Law and Home Rule Cities
While every square inch of Texas is included in one of its 254 counties, not all of Texas falls inside the limits of a city. Cities are created by their citizens, who are granted a charter by the state of Texas, much as a corporation operates under a state charter. Cities can be organized in two basic ways, depending on their size. Cities with a population of less than 5,000 can exist only as general law cities. A general law city has only the powers specifically granted by the legislature, which do not include broad annexation or regulatory powers. Cities with a population greater than 5,000 may elect home rule status. Home rule cities can do virtually anything they want that isn't prohibited by the legislature – leading to some of the issues discussed at the beginning of this chapter.

Larger cities (those exceeding 225,000) have a unique authority: that of "limited annexation," whereby an adjoining area may be annexed for purposes of imposing city ordinances related to safety and building codes. The residents can vote in mayor and council races but cannot vote in bond elections (and, consequently, the city cannot directly collect city sales tax from businesses or city property tax from owners). The City of Houston has exploited a provision in state law that allows it to share in sales tax revenues along with special districts (municipal utility districts, for instance) that cross an area "annexed for limited purposes." This has led to a spiderwebbing of limited-purpose or special-purpose annexations that consist of mostly commercial properties facing major streets.
These extend through otherwise unincorporated areas in what is known as the city's extraterritorial jurisdiction (ETJ), which, for Houston, extends five miles beyond its city limits. This has led to conflicts between city and county officials over the provision of services to areas not included in the agreements. The purpose of limited annexation is to allow the city to control development in an area that it eventually will fully annex; it is meant to do so within three years (though it can arrange "non-annexation agreements" with local property owners), and those agreements with municipal utility districts also cloud the picture. During each of the three years, the city is to develop land-use planning for the area (zoning, for example), identify needed capital improvements and ongoing projects, identify the financing for them, and provide essential municipal services.

Municipal elections in Texas are nonpartisan in the sense that candidates do not appear on the ballot on party lines and do not run as party tickets. However, a candidate's party affiliation is usually known or can be discerned with minimal effort (as the candidate has most likely supported other candidates on partisan tickets). In some instances, an informal citizens' group will support a slate of candidates that it desires to see elected (often in opposition to an incumbent group with which it disagreed on an issue). However, each candidate must be voted on individually.

Governance
Who runs city government? Cities in Texas can be organized in a variety of ways. The most common structure is the council-manager form of government. Citizens in San Antonio decided long ago that the political skill set required to be elected mayor of the city was not necessarily the skill set required to manage the day-to-day operations of a municipal water and sewer system with more than 12,000 miles of pipe – enough to stretch from Texas to Australia.
While the mayor of San Antonio presides over council meetings, the daily operations of city government are overseen by a professional city manager, who is hired by the city council for that purpose. Most major Texas cities, including Austin, Galveston, Dallas, and Fort Worth, use a council-manager form of government.

Houston, on the other hand, uses a strong-mayor form of government. The mayor of Houston not only presides over city council meetings, but is also the city's chief executive officer. Houston's strong-mayor system is considered especially strong because Houston mayors also have unilateral control over the city council agenda. On the other hand, Houston has a unique counterbalance in the form of an independently elected city controller, a chief financial officer who must concur in all city expenditures and bond issues, and who can conduct independent audits of city departments.

Some cities elect all their council members at-large, meaning any qualified person who lives in the city can run for any position. Other cities have adopted single-member districts to ensure that every part of town has a council member looking after the needs of its residents. At-large systems are frequently criticized for making it difficult for members of racial minority groups to be elected. Single-member district systems are criticized for creating a "turf" mentality that places parts of town in competition with each other for parks and libraries, removing the political incentive for council members to consider the needs of the city as a whole. Houston has a mixed system, with five members elected at-large and eleven from single-member districts.

Licenses and Attributions
CC LICENSED CONTENT, ORIGINAL
Municipal (City) Government in Texas. Authored by: Andrew Teas. License: CC BY: Attribution
https://oercommons.org/courseware/lesson/66296/overview
County Governments in Texas

Learning Objective
By the end of this section, you will be able to:
- Explain the structure and function of county government in Texas

Introduction
This section discusses the structure and function of Texas' 254 county governments.

County Governments in Texas
Texas has a total of 254 counties, by far the largest number of counties of any state. Under Spanish and, later, Mexican rule, Texas was divided into municipios, which, despite sharing a name origin with municipalities, were more like the counties of today – large districts containing one or more settlements and the surrounding rural land. When Texas became a Republic in 1836, the 23 municipios became counties, with a structure that changed only slightly before, during, and after the Civil War. By 1870, Texas had 129 counties, and the Constitution of 1876, still in place today, went into significant detail about their formation and operation. The last new county to be established was Loving County in 1931. More on Loving County later…

The structure of county government in Texas is defined in the Constitution, so it's not surprising that the form closely follows the plural-executive model of state government. Each Texas county is run in part by a five-member commissioners' court consisting of a county judge, elected at-large, and four county commissioners elected from each of four precincts. Many county functions are run by independently elected officials, who answer directly to the voters rather than to commissioners' court. While county commissioners have authority over each official's budget, they have little to say about the day-to-day administration of county offices.
In most counties, these independently elected officials include the county sheriff, the county attorney, the district attorney, the county clerk, the district clerk, the county treasurer, and the county tax assessor-collector, as well as a number of judges that varies widely with the population of the county.

County Judge
While a county judge, particularly in rural counties, does have a judicial function, a county judge in Texas is primarily the chair of the county commissioners' court. He also plays an important role as head of the county's emergency management functions.

County Commissioner
County commissioners in Texas are incredibly powerful, especially in large counties. Not only do they vote on countywide issues as part of commissioners' court, they have almost unilateral control over the planning and construction of roads, bridges, and parks within their precinct, which is one-fourth of the county (by population).

County Sheriff
The sheriff is the county's chief law enforcement officer. He also manages the county jail and provides security for the county courts.

County Attorney
The county attorney is the county's lawyer, providing legal advice and representing the county and its officials in all civil cases. This can present an interesting dilemma, since county officials are all independently elected. Sometimes a county official and the lawyer representing him may be political opponents. The county attorney also pursues civil enforcement actions on behalf of the county.

District Attorney
The district attorney is the state's prosecutor, representing the government in criminal cases in that county's state district courts.

County Clerk
The county clerk is the county's custodian of records and documents, in charge of public records such as bonds, birth and death certificates, and marriage licenses. The county clerk is also the chief election officer in most counties, administering elections and counting the votes.
District Clerk The district clerk is the recordkeeper for all records pertaining to the state district courts in that county. He coordinates the jury selection process and manages court registry funds. County Treasurer The county treasurer is the county's banker - receiving and depositing all county revenues, preparing the county payroll, and recording all county expenditures and receipts. County Tax Assessor-Collector Part of the county tax assessor-collector title is somewhat misleading - all tax "assessment" is now done by appraisal districts. The "collector" part still applies, however. In addition to collecting all county property taxes, the county tax assessor-collector usually collects property taxes for other taxing jurisdictions within the county, such as school districts and cities. He also issues license plates and registration stickers, and handles voter registration. County officials are elected in partisan elections, and commissioner precincts are redrawn every ten years following the census to roughly equalize the population of each. Unlike other states, Texas does not allow for consolidated city-county governments. Cities and counties (as well as other political entities) are permitted to enter "interlocal agreements" to share services (for instance, a city and a school district may enter into agreements with the county whereby the county bills for and collects property taxes for the city and school district; thus, only one tax bill is sent instead of three). Texas does allow municipalities to merge, but a consolidation of populous Harris County with its primary city, Houston, which would create the nation's second-largest city (after New York City), is not a prospect under current law. Unlike cities, which can receive sales tax revenue, counties are funded almost entirely with property taxes. Counties in Texas are general-law units of government, with limited regulatory powers. In most counties, this doesn't present a major problem. 
Populated areas are generally incorporated as cities, which have more extensive regulatory authority. Unincorporated areas – those areas outside the city limits of any city – have historically been rural areas with less need for regulation. Harris County, however, has become an important exception. Harris County’s population is nearly 5 million people as of 2019, with more than 2 million in the unincorporated area. If the unincorporated part of Harris County were a city, it would be the fifth-largest city in the United States. Fourteen states have fewer residents than the unincorporated part of Harris County, which has no building code and limited land use regulation. Meanwhile, in West Texas, Loving County has the exact same governance structure to administer a county with an estimated population of 152 – from which voters must choose at least a dozen elected county officials. Harris County sums up some of its challenges in its annual budget report: Harris County government provides services to all of the residents of the county. Most of the higher cost county functions including the courts system, Hospital District, county jail, and most of the county administrative functions are located within the City of Houston. County government is the primary provider of roads, parks, facilities, and law enforcement for the unincorporated areas. Harris County funds the county-wide and unincorporated area services primarily with property tax revenue. Despite the significant size and population of the unincorporated area, the county does not receive sales tax revenue to help fund services. The unique, ongoing challenge for Harris County government is to meet the needs of this rapidly growing unincorporated area without the funding sources provided to large cities in Texas. 
Most of the growth in expenditures in the County General Fund during this period has been for county-wide functions including law enforcement, the administration of justice, managing the jails, and the growing cost of indigent healthcare. As the population continues to grow, the demand for services, new roads, and expanded facilities in the unincorporated areas of the county will increase. Texas counties are prone to inefficient operations and are vulnerable to corruption, for several reasons. First, most of them do not have a merit system but operate on a spoils system, so many county employees obtain their positions through loyalty to a particular political party and commissioner rather than on the basis of the skills and experience appropriate to their positions. Second, most counties have not centralized purchasing into a single procurement department that could seek quantity discounts and carefully scrutinize bids and contract awards for unusual patterns. Third, in 90 percent of Texas counties, each commissioner is individually responsible for planning and executing a road construction and maintenance program for his or her own precinct, which can result in poor coordination and duplicate purchases of construction machinery. Licenses and Attributions CC LICENSED CONTENT, ORIGINAL County Government in Texas. Authored by: Andrew Teas. License: CC BY: Attribution
oercommons
2025-03-18T00:35:06.141647
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/66296/overview", "title": "Texas Government 2.0, Local Government in Texas", "author": null }
https://oercommons.org/courseware/lesson/66297/overview
Special Districts in Texas Overview Special Districts in Texas Learning Objectives By the end of this section, you will be able to: - Explain the structure and function of special districts in Texas Introduction “In general, most citizens know comparatively little about the jurisdiction, structure, functions, and governance of special purpose districts, thus making them the invisible government of Texas.” Texans seem to love special purpose districts. As of 2014, Texas had approximately 3,350 of them, and the number increases every year. There are far more special purpose districts in Texas than cities and counties combined, yet most Texans know almost nothing about their function, structure, or governance. Special Districts in Texas Special purpose districts are governmental entities with specific geographic boundaries that are created to provide specific services such as drainage, water and sewer service, or firefighting. Districts can be created by the Texas Legislature, by local governmental bodies, or sometimes by a state agency. Districts are controlled by a board of directors, sometimes elected by voters but sometimes appointed by the legislature or the governing body of a local city or county. If you are taking this class from a community college, your college is a special-purpose district – a community college district created by the state legislature and funded mostly by an ad valorem tax levied on property within the district boundaries. Let’s look at some of the different types of special districts in Texas. School Districts The most common district in Texas is an independent school district. Texas has 1,031 school districts, which manage 9,317 public schools serving over five million kindergarten through 12th-grade students. The number of school districts has declined in recent years as some rural districts with dwindling student populations have consolidated. School districts generally levy a higher property tax rate than any other jurisdiction. 
The Katy Independent School District has a property tax rate of $1.52 per $100 of property value, more than three times the $0.49 rate of the City of Katy. Despite the cost to taxpayers and the overall importance of their mission, school district trustees often receive less attention from voters and the press than city council members. A school district is governed by a board of trustees, elected in a non-partisan election – generally in May of odd-numbered years. Elected school trustees are volunteers, receiving no salary for their services. They hire a superintendent to run the day-to-day operations of the school district. The board of trustees sets the district property tax rate, approves the salary schedule for teachers and staff, and approves contracts for the construction and maintenance of school facilities and equipment. School districts are also transportation and food-service providers – often massive ones. The Houston Independent School District serves more than 269,000 meals and transports approximately 36,000 students to and from school on a fleet of nearly 1,000 buses every school day. Community College Districts Texas has 50 community college districts, serving more than 700,000 students. Chosen by voters in non-partisan elections, community college district boards of trustees serve the same role for their colleges that school district trustees do for their schools: setting the tax rate and salary schedules and approving contracts for facilities, equipment, and other needs. Community college boards hire a chancellor to run the district’s day-to-day operations. Municipal Utility Districts When developers create a new residential subdivision, generally outside the city limits of a nearby municipality, on previously rural land, how do those new homes get water and sewer service? Texas has long utilized the municipal utility district (MUD) to create that critical infrastructure. 
A MUD can be created by the legislature or by the Texas Commission on Environmental Quality with specific geographic boundaries. Once established, the residents of the district (sometimes a few development company employees moved into trailers specifically to be voters) vote to authorize the district to sell bonds – borrowing money from bondholders, who are paid back later with interest. The district uses the money raised from selling bonds to build a water and sewer system for the new subdivision. Homeowners then pay a property tax, as well as water and sewer rates for the water they use, to the district, which uses that money to repay the bonds. Hospital Districts Indigent health care in Texas is left largely to county governments, which are often ill-equipped to deal with this complex task. Some counties have formed hospital districts to collect a property tax and provide health services. The Harris County Hospital District (now called simply Harris Health) collects a property tax of $0.17 per $100 of property valuation. With the $717 million that tax raised in 2018, Harris Health handled more than 161,000 emergency room visits and more than 1.7 million outpatient clinic visits. Hospital districts are run by boards of trustees, with members appointed by county commissioners’ courts. Other Districts In addition to the four categories of districts discussed above, Texas has dozens of other types of special-purpose districts, from rural fire prevention districts, which provide fire protection services, to mosquito control districts that test for evidence of mosquito-borne diseases and spray insecticide. One interesting type of district is the Tax Increment Reinvestment Zone (TIRZ). A TIRZ can be created by the legislature or by a local jurisdiction and is a tool to jumpstart the improvement and redevelopment of a troubled area. 
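The "per $100 of valuation" tax rates quoted in this chapter translate into dollar amounts with simple arithmetic. A minimal sketch, using the Katy ISD, City of Katy, and Harris Health rates cited above (the $300,000 home value is a hypothetical example, not a figure from the text):

```python
def property_tax(taxable_value, rate_per_100):
    """Texas property taxes are levied per $100 of taxable value."""
    return taxable_value / 100 * rate_per_100

# Hypothetical $300,000 home, using rates cited in this chapter:
katy_isd = property_tax(300_000, 1.52)       # Katy ISD: $4,560
city_of_katy = property_tax(300_000, 0.49)   # City of Katy: $1,470
harris_health = property_tax(300_000, 0.17)  # Harris Health: $510
```

The same formula scales to an entire tax base: Harris Health's $717 million raised in 2018 at the $0.17 rate implies a taxable base of very roughly $422 billion.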
The taxable value of commercial property in a TIRZ is “frozen” at a certain point in time, with the city (sometimes in partnership with other taxing jurisdictions) continuing to collect taxes as if the value of property within the zone doesn’t change. If redevelopment efforts are successful in improving the area and raising property values, the TIRZ keeps the increment: the difference between the taxes actually generated by the higher property values and the taxes the city continues to collect on the frozen value. That money is used to finance further improvements in the area: new and improved streets, parks, better drainage, even additional police patrols. In theory, at least, the city loses little, since it would likely never have collected that increment anyway; property values in blighted areas tend to be stagnant. When the TIRZ expires, however, the city realizes a windfall of additional tax revenue, as well as an area with better infrastructure and a healthier business and residential climate. TIRZ board members are appointed by the governing body of the city that includes the TIRZ. LICENSES AND ATTRIBUTIONS CC LICENSED CONTENT, ORIGINAL Special Districts in Texas. Authored by: Andrew Teas. License: CC BY: Attribution
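The tax-increment mechanism described above can be expressed as a short calculation. A sketch with hypothetical numbers (the frozen value, redeveloped value, and tax rate are illustrative assumptions, not figures from the text):

```python
def tirz_increment(frozen_value, current_value, rate_per_100):
    """The city keeps collecting on the frozen value; taxes on any
    growth above that base (the "increment") flow to the TIRZ."""
    city_share = frozen_value / 100 * rate_per_100
    total = max(current_value, frozen_value) / 100 * rate_per_100
    return total - city_share

# Hypothetical zone frozen at $50M that redevelops to $80M, at a $0.60 rate:
annual_increment = tirz_increment(50_000_000, 80_000_000, 0.60)  # $180,000/yr
```

Note that if values stagnate (current value equal to frozen value), the increment is zero, which is the sense in which the city would never have collected that money anyway.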
Glossary Overview Glossary Glossary: Local Government in Texas at-large election: an election in which officials are selected by voters of the entire geographical area, rather than from smaller districts within that area county clerk: public official who is the main record-keeper of the county county commissioner: government official (four per county) on the county commissioners' court whose main duty is the construction and maintenance of roads and bridges county commissioners' court: the main governing body of each county; has the authority to set the county tax rate and budget county tax assessor-collector: public official who maintains the county tax records and collects taxes owed to the county district attorney: public official who prosecutes the more serious criminal cases in the district court home-rule charters: the rules under which a city operates; local governments have considerable independent governing power under these charters municipal utility district (MUD): a special district that offers services such as electricity, water, sewage, and sanitation outside the city limits school district: a specific type of special district that provides public education in a designated area special district: a unit of local government that performs a single service, such as education or sanitation, within a limited geographic area Licenses and Attributions CC LICENSED CONTENT, ORIGINAL Local Government in Texas: Glossary. Authored by: Andrew Teas. License: CC BY: Attribution
Assessment Overview This is a quiz for Chapter Six. Texas Government Chapter Six Quiz Check your knowledge of Chapter Six by taking the quiz linked below. The quiz will open in a new browser window or tab.
Public Opinion and the Media in Texas Overview Public Opinion and the Media in Texas Chapter Learning Objective By the end of this chapter, you will be able to: - Evaluate the role of public opinion and the media in Texas politics Introduction The collection of public opinion through polling and interviews is a part of political culture. Politicians want to know what the public thinks. Campaign managers want to know how citizens will vote. Media members seek to write stories about what the public wants. Every day, polls take the pulse of the people and report the results. And yet we have to wonder: Why do we care what people think? Over time, our beliefs and our attitudes about people, events, and ideas will become a set of norms, or accepted ideas, about what we may feel should happen in our society or what is right for the government to do in a situation. In this way, attitudes and beliefs form the foundation for opinions. As many a disappointed candidate knows, public opinion matters. The way opinions are formed and the way we measure public opinion also matters. But how much, and why? These are some of the questions we’ll explore in this chapter. Licensing and Attribution CC LICENSED CONTENT, ORIGINAL Revision and Adaptation. Authored by: panOpen. License: CC BY: Attribution CC LICENSED CONTENT, SHARED PREVIOUSLY American Government. Authored by: OpenStax. Provided by: OpenStax; Rice University. Located at: http://cnx.org/contents/5bcc0e59-7345-421d-8507- a1e4608685e8@18.11. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/5bcc0e59-7345- 421d-8507-a1e4608685e8@18.11.
Public Opinion Overview Public Opinion Learning Objective By the end of this section, you will be able to: - Explain why public opinion is important and the beliefs and ideologies that shape public opinion Introduction: What Is Public Opinion? Public opinion is a collection of popular views about something, perhaps a person, a local or national event, or a new idea. For example, each day, a number of polling companies call Americans at random to ask whether they approve or disapprove of the way the president is guiding the economy. When situations arise internationally, polling companies survey whether citizens support U.S. intervention in places like Syria or Ukraine. These individual opinions are collected together to be analyzed and interpreted for politicians and the media. The analysis examines how the public feels or thinks, so politicians can use the information to make decisions about their future legislative votes, campaign messages, or propaganda. But where do people’s opinions come from? Most citizens base their political opinions on their beliefs and their attitudes, both of which begin to form in childhood and develop through political socialization. Beliefs are closely held ideas that support our values and expectations about life and politics. For example, the idea that we are all entitled to equality, liberty, freedom, and privacy is a belief most people in the United States share. We may acquire this belief by growing up in the United States or by having come from a country that did not afford these valued principles to its citizens. Our attitudes are also affected by our personal beliefs and represent the preferences we form based on our life experiences and values. A person who has suffered racism or bigotry may have a skeptical attitude toward the actions of authority figures, for example. While attitudes and beliefs are slow to change, ideology can be influenced by events. 
A student might leave college with a liberal ideology but become more conservative as she ages. A first-year teacher may view unions with suspicion based on second-hand information but change his mind after reading newsletters and attending union meetings. These shifts may change the way citizens vote and the answers they give in polls. For this reason, political scientists often study when and why such changes in ideology happen, and how they influence our opinions about government and politicians. Political Socialization At the same time that our beliefs and attitudes are forming during childhood, we are also being socialized; that is, we are learning from many information sources about the society and community in which we live and how we are to behave in it. Political socialization is the process by which we are trained to understand and join a country’s political world, and, like most forms of socialization, it starts when we are very young. We may first become aware of politics by watching a parent or guardian vote, for instance, or by hearing presidents and candidates speak on television or the Internet, or seeing adults honor the American flag at an event. As socialization continues, we are introduced to basic political information in school. We recite the Pledge of Allegiance and learn about the Founding Fathers, the Constitution, the two major political parties, the three branches of government, and the economic system. By the time we complete school, we have usually acquired the information necessary to form political views and be contributing members of the political system. A young man may realize he prefers the Democratic Party because it supports his views on social programs and education, whereas a young woman may decide she wants to vote for the Republican Party because its platform echoes her beliefs about economic growth and family values. 
Accounting for the process of socialization is central to our understanding of public opinion, because the beliefs we acquire early in life are unlikely to change dramatically as we grow older. Our political ideology, made up of the attitudes and beliefs that help shape our opinions on political theory and policy, is rooted in who we are as individuals. Our ideology may change subtly as we grow older and are introduced to new circumstances or new information, but our underlying beliefs and attitudes are unlikely to change very much, unless we experience events that profoundly affect us. For example, family members of 9/11 victims became more Republican and more political following the terrorist attacks. Similarly, young adults who attended political protest rallies in the 1960s and 1970s were more likely to participate in politics in general than their peers who had not protested. Today, polling agencies have noticed that citizens’ beliefs have become far more polarized, or widely opposed, over the last decade. According to some scholars, these shifts led partisanship to become more polarized than in previous decades, as more citizens began thinking of themselves as conservative or liberal rather than moderate. Public Opinion and Elections Elections are the events on which opinion polls have the greatest measured effect. Public opinion polls do more than show how we feel on issues or project who might win an election. The media use public opinion polls to decide which candidates are ahead of the others and therefore of interest to voters and worthy of interview. From the moment President Obama was inaugurated for his second term, speculation began about who would run in the 2016 presidential election. Within a year, potential candidates were being ranked and compared by a number of newspapers. The speculation included favorability polls on Hillary Clinton, which measured how positively voters felt about her as a candidate. 
The media deemed these polls important because they showed Clinton as the frontrunner for the Democrats in the next election. Polling is also at the heart of horserace coverage, in which, just like an announcer at the racetrack, the media calls out every candidate’s move throughout the presidential campaign. Horserace coverage can be neutral, positive, or negative, depending upon what polls or facts are covered. During the 2012 presidential election, the Pew Research Center found that both Mitt Romney and President Obama received more negative than positive horserace coverage, with Romney’s growing more negative as he fell in the polls. Horserace coverage is often criticized for its lack of depth; the stories skip over the candidates’ issue positions, voting histories, and other facts that would help voters make an informed decision. Yet, horserace coverage is popular because the public is always interested in who will win, and it often makes up a third or more of news stories about the election. Exit polls, taken the day of the election, are the last election polls conducted by the media. Announced results of these surveys can deter voters from going to the polls if they believe the election has already been decided. During presidential primary season, we see examples of the bandwagon effect, in which the media pays more attention to candidates who poll well during the fall and the first few primaries. Bill Clinton was nicknamed the “Comeback Kid” in 1992, after he placed second in the New Hampshire primary despite accusations of adultery with Gennifer Flowers. The media’s attention on Clinton gave him the momentum to make it through the rest of the primary season, ultimately winning the Democratic nomination and the presidency. Public opinion polls also affect how much money candidates receive in campaign donations. 
Donors assume public opinion polls are accurate enough to determine who the top two to three primary candidates will be, and they give money to those who do well. Candidates who poll at the bottom will have a hard time collecting donations, increasing the odds that they will continue to do poorly. Presidents running for reelection also must perform well in public opinion polls, and being in office may not provide an automatic advantage. Americans often think about both the future and the past when they decide which candidate to support. They have three years of past information about the sitting president, so they can better predict what will happen if the incumbent is reelected. That makes it difficult for the president to mislead the electorate. Voters also want a future that is prosperous. Not only should the economy look good, but citizens want to know they will do well in that economy. For this reason, daily public approval polls sometimes act as both a referendum on the president and a predictor of success. Public Opinion and Government The relationship between public opinion polls and government action is murkier than that between polls and elections. Like the news media and campaign staffers, members of the three branches of government are aware of public opinion. But do politicians use public opinion polls to guide their decisions and actions? The short answer is “sometimes.” The public is not perfectly informed about politics, so politicians realize public opinion may not always be the right choice. Yet many political studies, from The American Voter in the 1960s to The American Voter Revisited in the 2000s, have found that voters behave rationally despite having limited information. Individual citizens do not take the time to become fully informed about all aspects of politics, yet their collective behavior and the opinions they hold as a group make sense. 
They appear to be informed just enough, using preferences like their political ideology and party membership, to make decisions and hold politicians accountable during an election year. Overall, the collective public opinion of a country changes over time, even if party membership or ideology does not change dramatically. As James Stimson’s prominent study found, the public’s mood, or collective opinion, can become more or less liberal from decade to decade. While the initial study on public mood revealed that the economy has a profound effect on American opinion, further studies have gone on to determine whether public opinion, and its relative liberalness, in turn affects politicians and institutions. This idea does not argue that opinion never affects policy directly, but rather that collective opinion also affects politicians’ decisions on policy. Individually, of course, politicians cannot predict what will happen in the future or who will oppose them in the next few elections. They can, however, look to see where the public is in agreement as a body. If the public mood changes, politicians may change positions to match it. The more savvy politicians look carefully to recognize when shifts occur. When the public is more or less liberal, politicians may make slight adjustments to their behavior to match. Politicians who frequently seek to win office, like House members, will pay attention to the long- and short-term changes in opinion. By doing this, they will be less likely to lose on Election Day. Presidents and justices, on the other hand, present a more complex picture. Link to Learning Policy Agendas Project The website of the Policy Agendas Project details a National Science Foundation-funded policy project to provide data on public opinion, presidential public approval, and a variety of governmental measures of activity. 
All data are coded by policy topic, so you can look for trends in a policy topic of interest to you to see whether government attention tracks with public opinion. References and Further Reading Gallup. 2015. Gallup Daily: Obama Job Approval Gallup: News; Rasmussen Reports. 2015. Daily Presidential Tracking Poll. Ras Reports; Roper Center (2015). Obama Presidential Approval. Roper Center. V. O. Key, Jr. 1966. The Responsible Electorate. Harvard University: Belknap Press. John Zaller. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press. Eitan Hersh. 2013. "Long-Term Effect of September 11 on the Political Behavior of Victims’ Families and Neighbors." Proceedings of the National Academy of Sciences of the United States of America 110 (52): 20959–63. M. Kent Jennings. 2002. "Generation Units and the Student Protest Movement in the United States: An Intra- and Intergenerational Analysis." Political Psychology 23 (2): 303–324. Pew Research Center (2014). Political Polarization in the American Public. Pew Research Center. Joseph Bafumi & Robert Shapiro (2009). A New Partisan Voter. The Journal of Politics 71 (1): 1–24. Hitlin, P. (2013). The 2016 Presidential Media Primary Is Off to a Fast Start. Pew Research Center. Retrieved October 22, 2019. Kiley, J. (2015). A Clinton Candidacy: Voters' Early Impressions. Pew Research Center. Retrieved October 22, 2019. Texas Politics Project (2018). Ted Cruz Favorability (2018) - by Party ID. Retrieved October 29, 2019. Pew Research Center. (2012). Winning the Media Campaign. Pew Research Center. Pew Research Center (2012). Fewer Horserace Stories-and Fewer Positive Obama Stories-Than in 2008. Pew Research Center. Erikson, R. S., MacKuen, M. B., and Stimson, J. A. (2000). Bankers or Peasants Revisited: Economic Expectations and Presidential Approval. Electoral Studies 19: 295–312. Retrieved October 22, 2019. MacKuen, M. B., Erikson, R. S., & Stimson, J. A. (1989). Macropartisanship. 
American Political Science Review 83(4). 1125–1142. Stimson, J. A., Mackuen, M. B. & Erikson, R. S. (1995). Dynamic Representation. American Political Science Review 89 (3): 543–565 Licensing and Attribution CC LICENSED CONTENT, ORIGINAL Revision and Adaptation. Authored by: Daniel M. Regalado. License: CC BY: Attribution CC LICENSED CONTENT, SHARED PREVIOUSLY American Government. Authored by: OpenStax. Provided by: OpenStax; Rice University. Located at: http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11
Measuring Public Opinion in Texas Overview Measuring Public Opinion in Texas Learning Objectives By the end of this section, you will be able to: - Identify common ways to measure and quantify public opinion Introduction Public opinion polls tell us what proportion of a population has a specific viewpoint. They do not explain why respondents believe as they do or how to change their minds. This is the work of social scientists and scholars. Polls are simply a measurement tool that tells us how a population thinks and feels about any given topic. This can be useful in helping different cultures understand one another because it gives the people a chance to speak for themselves instead of letting only vocal media stars speak on behalf of all. Opinion polling gives people who do not usually have access to the media an opportunity to be heard. Taking a Poll Most public opinion polls aim to be accurate, but this is not an easy task. Political polling is a science. From design to implementation, polls are complex and require careful planning and care. Mitt Romney’s campaign polls are only a recent example of problems stemming from polling methods. Our history is littered with examples of polling companies producing results that incorrectly predicted public opinion due to poor survey design or bad polling methods. In 1936, Literary Digest continued its tradition of polling citizens to determine who would win the presidential election. The magazine sent opinion cards to people who had a subscription, a phone, or a car registration. Only some of the recipients sent back their cards. The result? Alf Landon was predicted to win 55.4 percent of the popular vote; in the end, he received only 38 percent. Franklin D. Roosevelt won another term, but the story demonstrates the need to be scientific in conducting polls. A few years later, Thomas Dewey lost the 1948 presidential election to Harry Truman, despite polls showing Dewey far ahead and Truman destined to lose. 
More recently, John Zogby, of Zogby Analytics, went public with his prediction that John Kerry would win the presidency against incumbent president George W. Bush in 2004, only to be proven wrong on election night. These are just a few cases, but each offers a different lesson. In 1948, pollsters did not poll up to the day of the election, relying on old numbers that did not include a late shift in voter opinion. Zogby’s polls did not represent likely voters and incorrectly predicted who would vote and for whom. These examples reinforce the need to use scientific methods when conducting polls, and to be cautious when reporting the results. Most polling companies employ statisticians and methodologists trained in conducting polls and analyzing data. A number of criteria must be met if a poll is to be completed scientifically. First, the methodologists identify the desired population, or group, of respondents they want to interview. For example, if the goal is to project who will win the presidency, citizens from across the United States should be interviewed. If we wish to understand how voters in Colorado will vote on a proposition, the population of respondents should only be Colorado residents. When surveying on elections or policy matters, many polling houses will interview only respondents who have a history of voting in previous elections, because these voters are more likely to go to the polls on Election Day. Politicians are more likely to be influenced by the opinions of proven voters than of everyday citizens. Once the desired population has been identified, the researchers will begin to build a sample that is both random and representative. A random sample consists of a limited number of people from the overall population, selected in such a way that each has an equal chance of being chosen. In the early years of polling, telephone numbers of potential respondents were arbitrarily selected from various areas to avoid regional bias. 
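Drawing a simple random sample, in which every member of the population has an equal chance of being chosen, can be sketched in a few lines of Python. The numbered voter roll below is a hypothetical stand-in for a real respondent list.

```python
import random

# Hypothetical voter roll: 10,000 registered voters, identified by number.
population = list(range(10_000))

# random.sample draws without replacement, so every voter has the same
# chance of being selected and no one is interviewed twice.
sample = random.sample(population, k=1_000)

print(len(sample))       # 1000 respondents
print(len(set(sample)))  # 1000 distinct voters -- no duplicates
```

In practice the hard part is not the draw itself but obtaining a complete list of the population to draw from, which is why pollsters turned to phone numbers and now wrestle with their complications.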
While landline phones allow polls to try to ensure randomness, the increasing use of cell phones makes this process difficult. Cell phones, and their numbers, are portable and move with the owner. Polls that include known cellular numbers may therefore screen for zip codes and other geographic indicators to avoid regional bias. A representative sample consists of a group whose demographic distribution is similar to that of the overall population. For example, nearly 51 percent of the U.S. population is female. To match this distribution, any poll intended to measure what most Americans think about an issue should survey a sample containing slightly more women than men. Pollsters try to interview a set number of citizens to create a reasonable sample of the population. This sample size will vary based on the size of the population being interviewed and the level of accuracy the pollster wishes to reach. If the poll is trying to reveal the opinion of a state or group, such as the opinion of Wisconsin voters about changes to the education system, the sample size may vary from five hundred to one thousand respondents and produce results with relatively low error. For a poll to predict what Americans think nationally, such as about the White House’s policy on greenhouse gases, the sample size should be larger. The sample size varies with each organization and institution due to the way the data are processed. Gallup often interviews only five hundred respondents, while Rasmussen Reports and Pew Research often interview one thousand to fifteen hundred respondents. Academic organizations, like the American National Election Studies, interview over twenty-five hundred respondents. A larger sample makes a poll more accurate, because it will have relatively fewer unusual responses and be more representative of the actual population. Pollsters do not interview more respondents than necessary, however. 
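Why not? A poll's precision is conventionally summarized by its margin of error, which shrinks with the square root of the sample size, so each additional respondent buys less accuracy than the last. A short Python sketch using the textbook formula MOE = z·√(p(1−p)/n), with sample sizes like those quoted above (p = 0.5 gives the worst-case margin at 95 percent confidence):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1_000, 1_500, 2_500):
    print(f"n={n}: +/-{margin_of_error(n) * 100:.1f} points")
# n=500:  +/-4.4 points
# n=1000: +/-3.1 points
# n=1500: +/-2.5 points
# n=2500: +/-2.0 points
```

Going from 500 to 1,000 respondents tightens the margin by more than a full point, but adding another 1,000 (from 1,500 to 2,500) gains only about half a point, which is why pollsters stop adding respondents once the sample is representative.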
Increasing the number of respondents will increase the accuracy of the poll, but once the poll has enough respondents to be representative, increases in accuracy become minor and are not cost-effective. When the sample represents the actual population, the poll’s accuracy will be reflected in a lower margin of error. The margin of error is a number that states how far the poll results may be from the actual opinion of the total population of citizens. The lower the margin of error, the more predictive the poll. Large margins of error are problematic. For example, if a poll that claims Hillary Clinton is likely to win 30 percent of the vote in the 2016 New York Democratic primary has a margin of error of +/-6, it tells us that Clinton may receive as little as 24 percent of the vote (30 – 6) or as much as 36 percent (30 + 6). A lower margin of error is clearly desirable because it gives us the most precise picture of what people actually think or will do. With many polls out there, how do you know whether a poll is reliable and accurately reflects what a group believes? First, look for the numbers. Polling companies include the margin of error, polling dates, number of respondents, and population sampled to show their scientific reliability. Was the poll recently taken? Is the question clear and unbiased? Was the number of respondents high enough to represent the population? Is the margin of error small? It is worth looking for this valuable information when you interpret poll results. While most polling agencies strive to create quality polls, other organizations want fast results and may prioritize immediate numbers over random and representative samples. For example, instant polling is often used by news networks to quickly assess how well candidates are performing in a debate. Technology and Polling The days of randomly walking neighborhoods and phone book cold-calling to interview random citizens are gone. 
Scientific polling has made interviewing more deliberate. Historically, many polls were conducted in person, yet this was expensive and yielded problematic results. In some situations and countries, face-to-face interviewing still exists. Exit polls, focus groups, and some public opinion polls occur in which the interviewer and respondents communicate in person. Exit polls are conducted with an interviewer standing near a polling location and requesting information as voters leave the polls. Focus groups often select random respondents from local shopping places or pre-select respondents from Internet or phone surveys. The respondents show up to observe or discuss topics and are then surveyed. When organizations like Gallup or Roper decide to conduct face-to-face public opinion polls, however, it is a time-consuming and expensive process. The organization must randomly select households or polling locations within neighborhoods, making sure there is a representative household or location in each neighborhood. Then it must survey a representative number of neighborhoods from within a city. At a polling location, interviewers may have directions on how to randomly select voters of varied demographics. If the interviewer is trying to reach a person at home, multiple attempts are made if he or she does not answer. Gallup conducts face-to-face interviews in areas where fewer than 80 percent of households have phones because it gives a more representative sample. Most polling now occurs over the phone or through the Internet. Some companies, like Harris Interactive, maintain directories that include registered voters, consumers, or previously interviewed respondents. If pollsters need to interview a particular population, such as political party members or retirees of a specific pension fund, the company may purchase or access a list of phone numbers for that group. 
Other organizations, like Gallup, use random-digit-dialing (RDD), in which a computer randomly generates phone numbers with desired area codes. Using RDD allows the pollsters to include respondents who may have unlisted and cellular numbers. Questions about ZIP code or demographics may be asked early in the poll to allow the pollsters to determine which interviews to continue and which to end early. The interviewing process is also partly computerized. Many polls are now administered through computer-assisted telephone interviewing (CATI) or through robo-polls. A CATI system calls random telephone numbers until it reaches a live person and then connects the potential respondent with a trained interviewer. As the respondent provides answers, the interviewer enters them directly into the computer program. These polls may have some errors if the interviewer enters an incorrect answer. The polls may also have reliability issues if the interviewer goes off the script or answers respondents’ questions. Robo-polls are entirely computerized. A computer dials random or pre-programmed numbers and a prerecorded electronic voice administers the survey. The respondent listens to the question and possible answers and then presses numbers on the phone to enter responses. Proponents argue that respondents are more honest without an interviewer. However, these polls can suffer from error if the respondent does not use the correct keypad number to answer a question or misunderstands the question. Robo-polls may also have lower response rates because there is no live person to persuade the respondent to answer. There is also no way to prevent children from answering the survey. Lastly, the Telephone Consumer Protection Act (1991) made automated calls to cell phones illegal, which leaves a large population of potential respondents inaccessible to robo-polls. The latest challenges in telephone polling come from the shift in phone usage. 
A growing number of citizens, especially younger citizens, use only cell phones, and their phone numbers are no longer based on geographic areas. The millennial generation (currently aged 18–33) is also more likely to text than to answer an unknown call, so it is harder to interview this demographic group. Polling companies now must reach out to potential respondents using email and social media to ensure they have a representative group of respondents. Yet, the technology required to move to the Internet and handheld devices presents further problems. Web surveys must be designed to run on a wide variety of browsers and handheld devices. Online polls cannot detect whether a person with multiple email accounts or social media profiles answers the same poll multiple times, nor can they tell when a respondent misrepresents demographics in the poll or on a social media profile used in a poll. These factors also make it more difficult to calculate response rates or achieve a representative sample. Yet, many companies are working through these difficulties, because it is necessary to reach younger demographics in order to provide accurate data. The Ins and Outs of Polls Ever wonder what happens behind the polls? To find out, we posed a few questions to Scott Keeter, Director of Survey Research at Pew Research Center. Q: What are some of the most common misconceptions about polling? A: A couple of them recur frequently. The first is that it is just impossible for one thousand or fifteen hundred people in a survey sample to adequately represent a population of 250 million adults. But of course it is possible. Random sampling, which has been well understood for the past several decades, makes it possible. If you don’t trust small random samples, then ask your doctor to take all of your blood the next time you need a diagnostic test. The second misconception is that it is possible to get any result we want from a poll if we are willing to manipulate the wording sufficiently. 
While it is true that question wording can influence responses, it is not true that a poll can get any result it sets out to get. People aren’t stupid. They can tell if a question is highly biased and they won’t react well to it. Perhaps more important, the public can read the questions and know whether they are being loaded with words and phrases intended to push a respondent in a particular direction. That’s why it’s important to always look at the wording and the sequencing of questions in any poll. Q: How does your organization choose polling topics? A: We choose our topics in several ways. Most importantly, we keep up with developments in politics and public policy, and try to make our polls reflect relevant issues. Much of our research is driven by the news cycle and topics that we see arising in the near future. We also have a number of projects that we do regularly to provide a look at long-term trends in public opinion. For example, we’ve been asking a series of questions about political values since 1987, which has helped to document the rise of political polarization in the public. Another is a large (thirty-five thousand interviews) study of religious beliefs, behaviors, and affiliations among Americans. We released the first of these in 2007, and a second in 2015. Finally, we try to seize opportunities to make larger contributions on weighty issues when they arise. When the United States was on the verge of a big debate on immigration reform in 2006, we undertook a major survey of Americans’ attitudes about immigration and immigrants. In 2007, we conducted the first-ever nationally representative survey of Muslim Americans. Q: What is the average number of polls you oversee in a week? A: It depends a lot on the news cycle and the needs of our research groups. We almost always have a survey in progress, but sometimes there are two or three going on at once. 
At other times, we are more focused on analyzing data already collected or planning for future surveys. Q: Have you placed a poll in the field and had results that really surprised you? A: It’s rare to be surprised because we’ve learned a lot over the years about how people respond to questions. But here are some findings that jumped out to some of us in the past: In 2012, we conducted a survey of people who said their religion is “nothing in particular.” We asked them if they are “looking for a religion that would be right” for them, based on the expectation that many people without an affiliation—but who had not said they were atheists or agnostic—might be trying to find a religion that fit. Only 10 percent said that they were looking for the right religion. We—and many others—were surprised that public opinion about Muslims became more favorable after the 9/11 terrorist attacks. It’s possible that President Bush’s strong appeal to people not to blame Muslims in general for the attack had an effect on opinions. It’s also surprising that basic public attitudes about gun control (whether pro or anti) barely move after highly publicized mass shootings. Were you surprised by the results Scott Keeter reported in response to the interviewer’s final question? Why or why not? Conduct some research online to discover what degree plans or work experience would help a student find a job in a polling organization. Problems in Polling For a number of reasons, polls may not produce accurate results. Two important factors a polling company faces are timing and human nature. Unless you conduct an exit poll during an election and interviewers stand at the polling places on Election Day to ask voters how they voted, there is always the possibility the poll results will be wrong. The simplest reason is that if there is time between the poll and Election Day, a citizen might change his or her mind, lie, or choose not to vote at all. 
Timing is very important during elections, because surprise events can shift enough opinions to change an election result. Of course, there are many other reasons why polls, even those not time-bound by elections or events, may be inaccurate. Polls begin with a list of carefully written questions. The questions need to be free of framing, meaning they should not be worded to lead respondents to a particular answer. For example, take two questions about presidential approval. Question 1 might ask, “Given the high rate of mass shootings in the U.S., do you approve of the job President Trump is doing?” Question 2 might ask, “Do you approve of the job President Trump is doing?” Both questions want to know how respondents perceive the president’s success, but the first question sets up a frame for the respondent to believe the president is performing poorly before answering. This is likely to make the respondent’s answer more negative. Similarly, the way we refer to an issue or concept can affect the way listeners perceive it. The phrase “estate tax” did not rally voters to protest the inheritance tax, but the phrase “death tax” sparked debate about whether taxing estates imposed a double tax on income. Many polling companies try to avoid leading questions, which lead respondents to select a predetermined answer, because they want to know what people really think. Some polls, however, have a different goal. Their questions are written to guarantee a specific outcome, perhaps to help a candidate get press coverage or gain momentum. These are called push polls. In the 2016 presidential primary race, MoveOn tried to encourage Senator Elizabeth Warren (D-MA) to enter the race for the Democratic nomination. Its poll used leading questions for what it termed an “informed ballot,” and, to show that Warren would do better than Hillary Clinton, it included ten positive statements about Warren before asking whether the respondent would vote for Clinton or Warren. 
The poll results were blasted by some in the media for being fake. Sometimes lack of knowledge affects the results of a poll. Respondents may not know that much about the polling topic but are unwilling to say, “I don’t know.” For this reason, surveys may contain a quiz with questions that determine whether the respondent knows enough about the situation to answer survey questions accurately. A poll to discover whether citizens support changes to the Affordable Care Act or Medicaid might first ask who these programs serve and how they are funded. Polls about territory seizure by the Islamic State (or ISIS) or Russia’s aid to rebels in Ukraine may include a set of questions to determine whether the respondent reads or hears any international news. Respondents who cannot answer correctly may be excluded from the poll, or their answers may be separated from the others. People may also feel social pressure to answer questions in accordance with the norms of their area or peers. If they are embarrassed to admit how they would vote, they may lie to the interviewer. In the 1982 governor’s race in California, Los Angeles Mayor Tom Bradley was far ahead in the polls, yet on Election Day he lost. This result was nicknamed the Bradley effect, a theory based on observed discrepancies between voter opinion polls and election outcomes in government elections where a white candidate and a non-white candidate run against each other. The theory proposes that some voters who intend to vote for the white candidate would nonetheless tell pollsters that they are undecided or likely to vote for the non-white candidate. In this case, voters who answered the poll succumbed to social desirability bias, and were afraid to admit they would not vote for a black man because it would appear politically incorrect and racist. In 2010, Proposition 19, which would have legalized and taxed marijuana in California, met with a new version of the Bradley effect. 
Nate Silver, a political blogger, noticed that polls on the marijuana proposition were inconsistent, sometimes showing the proposition would pass and other times showing it would fail. Silver compared the polls and the way they were administered because some polling companies used an interviewer and some used robo-calling. He then proposed that voters speaking with a live interviewer gave the socially acceptable answer that they would vote against Proposition 19, while voters interviewed by a computer felt free to be honest. Interviewer demographics may also affect respondents’ answers. African Americans, for example, may give different responses to interviewers who are white than to interviewers who are black. Push Polls One of the newer byproducts of polling is the creation of push polls, which consist of political campaign information presented as polls. A respondent is called and asked a series of questions about his or her position or candidate selections. If the respondent’s answers favor the “wrong” candidate, the next questions will give negative information about the candidate in an effort to change the voter’s mind. In 2014, a fracking ban was placed on the ballot in Denton, Texas. Fracking, which involves injecting pressurized water into drilled wells, helps energy companies collect additional gas from the earth. It is controversial, with opponents arguing it causes water pollution, sound pollution, and earthquakes. During the campaign, a number of local voters received a call that polled them on how they planned to vote on the proposed fracking ban. If the respondent was unsure about or planned to vote for the ban, the questions shifted to provide negative information about the organizations proposing the ban. One question asked, “If you knew the following, would it change your vote . . . 
two Texas railroad commissioners, the state agency that oversees oil and gas in Texas, have raised concerns about Russia’s involvement in the anti-fracking efforts in the U.S.?” The question played upon voter fears about Russia and international instability in order to convince them to vote against the fracking ban. These techniques are not limited to issue votes; candidates have used them to attack their opponents. The hope is that voters will think the poll is legitimate and believe the negative information provided by a “neutral” source. Polling in Texas Most polling is conducted at the national level; there are far fewer polls conducted at the state level. In Texas we’re fortunate to have the University of Texas/Texas Tribune Poll. Beginning in 2008, the Texas Politics Project at the University of Texas (UT), under the direction of James Henson and Joshua Blank, has conducted three to four statewide public opinion polls each year to assess the opinions of registered voters on upcoming elections, public policy, and attitudes towards politics, politicians, and government. In 2009, UT partnered with the Texas Tribune and continued to regularly measure public opinion in Texas, making the data freely available to students, researchers, and the general public in their data archive. To see what Texans are thinking about politics, or to do some of your own analysis, please visit their polling page where you’ll find a wealth of information on public opinion in Texas. References and Further Reading Gallup. 2015. “Gallup Daily: Obama Job Approval.” Gallup, June 6, 2015 (retrieved February 17, 2016); Rasmussen Reports. 2015. “Daily Presidential Tracking Poll.” Rasmussen Reports, June 6, 2015 (retrieved February 17, 2016); Roper Center. 2015. “Obama Presidential Approval.” Roper Center, June 6, 2015. V. O. Key, Jr. (1966). 
The Responsible Electorate: Rationality in Presidential Voting, 1936–1960. Harvard University: Belknap Press. Arthur Evans, “Predict Landon Electoral Vote to be 315 to 350,” Chicago Tribune, 18 October 1936. United States Census Bureau. 2012. “Age and Sex Composition in the United States: 2012.” United States Census Bureau. Rasmussen Reports. 2015. “Daily Presidential Tracking Poll.” Rasmussen Reports, September 27, 2015 (retrieved February 17, 2016); Pew Research Center. 2015. “Sampling.” Pew Research Center (retrieved February 17, 2016). American National Election Studies (ANES) Data Center. 2016 Time Series Study. Retrieved September 5, 2019. Michael W. Link and Robert W. Oldendick. 1997. “Good” Polls / “Bad” Polls—How Can You Tell? Ten Tips for Consumers of Survey Research. South Carolina Policy Forum; Pew Research Center (2015). Sampling. Retrieved September 5, 2019. Cornell University (2015). Polling Fundamentals – Sampling. Roper Center for Public Opinion Research. Retrieved September 5, 2019. Gallup. How Does the Gallup World Poll Work? Retrieved September 5, 2019. Gallup. Does Gallup Call Cellphones? Retrieved September 5, 2019. Mark Blumenthal, “The Case for Robo-Pollsters: Automated Interviewers Have Their Drawbacks, But Fewer Than Their Critics Suggest,” National Journal, 14 September 2009. Mark Blumenthal, “Is Polling As We Know It Doomed?” National Journal, 10 August 2009. Frank Luntz. 2007. Words That Work: It’s Not What You Say, It’s What People Hear. New York: Hyperion. Aaron Blake, “This terrible poll shows Elizabeth Warren beating Hillary Clinton,” Washington Post, 11 February 2015. Nate Silver (2010). “The Broadus Effect? Social Desirability Bias and California Proposition 19.” FiveThirtyEight Politics. Retrieved October 19, 2019. Gary Langer (November 8, 1989). “Election Poll Problems: Did Some Voters Lie?” Associated Press. Retrieved October 28, 2019. Elder, J. (May 16, 2007). Will There Be an ‘Obama Effect?’ The New York Times. Retrieved October 28, 2019. 
Davis, D. (1997). The Direction of Race of Interviewer Effects among African-Americans: Donning the Black Mask. American Journal of Political Science, 41(1), 309–322. Retrieved October 22, 2019. Kate Sheppard (2014, July 16). “Top Texas Regulator: Could Russia be Behind City’s Proposed Fracking Ban?” Huffington Post. Jim Henson & Joshua Blank (2019, June 21). The Public Opinion Underpinning of Texas GOP Leaders’ Pivot Back to Immigration and Border Security. Texas Politics Project, University of Texas at Austin. Licensing and Attribution CC LICENSED CONTENT, ORIGINAL Revision and Adaptation. Authored by: Daniel M. Regalado. License: CC BY: Attribution CC LICENSED CONTENT, SHARED PREVIOUSLY American Government. Authored by: OpenStax. Provided by: OpenStax; Rice University. Located at: http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11 License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11
https://oercommons.org/courseware/lesson/66355/overview
The Media Overview The Media Learning Objective By the end of this section, you will be able to: - Explain what the media are and how they are organized Introduction Ours is an exploding media system. What started as print journalism was subsequently supplemented by radio coverage, then network television, followed by cable television. Now, with the addition of the Internet, blogs and social media—a set of applications or web platforms that allow users to immediately communicate with one another—give citizens a wide variety of sources for instant news of all kinds. The Internet also allows citizens to initiate public discussion by uploading images and video for viewing, such as videos documenting interactions between citizens and the police. Provided we are connected digitally, we have a bewildering array of choices for finding information about the world. In fact, some might say that compared to the tranquil days of the 1970s, when we might read the morning newspaper over breakfast and take in the network news at night, there are now too many choices in today’s increasingly complex world of information. This reality may make the news media all the more important in structuring and shaping narratives about U.S. politics. Or the proliferation of competing information sources like blogs and social media may actually weaken the power of the news media relative to the days when news media monopolized our attention. Media Basics The term media refers to a number of different communication formats, from television media, which share information through broadcast airwaves, to print media, which rely on printed documents. The collection of all forms of media that communicate information to the general public is called mass media, including television, print, radio, and Internet. One of the primary reasons citizens turn to the media is for news. We expect the media to cover important political and social events and information in a concise and neutral manner. 
To accomplish its work, the media employs a number of people in varied positions. Journalists and reporters are responsible for uncovering news stories by keeping an eye on areas of public interest, like politics, business, and sports. Once a journalist has a lead or a possible idea for a story, he or she researches background information and interviews people to create a complete and balanced account. Editors work in the background of the newsroom, assigning stories, approving articles or packages, and editing content for accuracy and clarity. Publishers are people or companies that own and produce print or digital media. They oversee both the content and finances of the publication, ensuring the organization turns a profit and creates a high-quality product to distribute to consumers. Producers oversee the production and finances of visual media, like television, radio, and film. The work of the news media differs from public relations, which is communication carried out to improve the image of companies, organizations, or candidates for office. Public relations is not a neutral information form. While journalists write stories to inform the public, a public relations spokesperson is paid to help an individual or organization get positive press. Public relations materials normally appear as press releases or paid advertisements in newspapers and other media outlets. Some less reputable publications, however, publish paid articles under the news banner, blurring the line between journalism and public relations. Media Types and Functions Each form of media has its own complexities and is used by different demographics. Millennials (currently aged 18–33) are more likely to get news and information from social media, such as YouTube, Twitter, and Facebook, while baby boomers (currently aged 50–68) are most likely to get their news from television, either national broadcasts or local news. Television alone offers viewers a variety of formats. 
Programming may be scripted, like dramas or comedies. It may be unscripted, like game shows or reality programs, or informative, such as news programming. Although most programs are created by a television production company, national networks—like CBS or NBC—purchase the rights to programs they distribute to local stations across the United States. Most local stations are affiliated with a national network corporation, and they broadcast national network programming to their local viewers. Before the existence of cable and fiber optics, networks needed to own local affiliates to have access to the local station’s transmission towers. Towers have a limited radius, so each network needed an affiliate in each major city to reach viewers. While cable technology has lessened networks’ dependence on aerial signals, some viewers still use antennas and receivers to view programming broadcast from local towers. Affiliates, by agreement with the networks, give priority to network news and other programming chosen by the affiliate’s national media corporation. Local affiliate stations are told when to air programs or commercials, and they diverge only to inform the public about a local or national emergency. For example, ABC affiliates broadcast the popular television show Once Upon a Time at a specific time on a specific day. Should a fire threaten homes and businesses in a local area, the affiliate might preempt it to update citizens on the fire’s dangers and return to regularly scheduled programming after the danger has ended. Most affiliate stations will show local news before and after network programming to inform local viewers of events and issues. Network news has a national focus on politics, international events, the economy, and more. Local news, on the other hand, is likely to focus on matters close to home, such as regional business, crime, sports, and weather. 
The NBC Nightly News, for example, covers presidential campaigns and the White House or skirmishes between North Korea and South Korea, while the NBC affiliate in Los Angeles (KNBC-TV) and the NBC affiliate in Dallas (KXAS-TV) report on the governor’s activities or weekend festivals in the region. Cable programming offers national networks a second method to directly reach local viewers. As the name implies, cable stations transmit programming directly to a local cable company hub, which then sends the signals to homes through coaxial or fiber optic cables. Because cable does not broadcast programming through the airwaves, cable networks can operate across the nation directly without local affiliates. Instead they purchase broadcasting rights for the cable stations they believe their viewers want. For this reason, cable networks often specialize in different types of programming. The Cable News Network (CNN) was the first news station to take advantage of this specialized format, creating a 24-hour news station with live coverage and interview programs. Other news stations quickly followed, such as MSNBC and FOX News. A viewer might tune in to Nickelodeon and catch family programs and movies or watch ESPN to catch up with the latest baseball or basketball scores. The Cable-Satellite Public Affairs Network, known better as C-SPAN, now has three channels covering Congress, the president, the courts, and matters of public interest. Cable and satellite providers also offer on-demand programming for most stations. Citizens can purchase cable, satellite, and Internet subscription services (like Netflix) to find programs to watch instantly, without being tied to a schedule. Initially, on-demand programming was limited to rebroadcasting old content and was commercial-free. Yet many networks and programs now allow their new programming to be aired within a day or two of its initial broadcast. In return they often add commercials the user cannot fast-forward or avoid. 
Thus networks expect advertising revenues to increase. The on-demand nature of the Internet has created many opportunities for news outlets. While early media providers were those who could pay the high cost of printing or broadcasting, modern media require just a URL and ample server space. The ease of online publication has made it possible for more niche media outlets to form. The websites of the New York Times and other newspapers often focus on matters affecting the United States, while channels like BBC America present world news. FOX News presents political commentary and news in a conservative vein, while the Internet site Daily Kos offers a liberal perspective on the news. Politico.com is perhaps the leader in niche journalism. Unfortunately, the proliferation of online news has also increased the amount of poorly written material with little editorial oversight, and readers must be cautious when reading Internet news sources. Sites like BuzzFeed allow members to post articles without review by an editorial board, leading to articles of varied quality and accuracy. The Internet has also made publication speed a consideration for professional journalists. No news outlet wants to be the last to break a story, and the rush to publication often leads to typographical and factual errors. Even large news outlets, like the Associated Press, have published articles with errors in their haste to get a story out. The Internet also facilitates the flow of information through social media, which allows users to instantly communicate with one another and share with audiences that can grow exponentially. Facebook and Twitter have millions of daily users. Social media changes more rapidly than the other media formats. While people in many different age groups use sites like Facebook, Twitter, and YouTube, other sites like Snapchat and Yik Yak appeal mostly to younger users. The platforms also serve different functions.
Tumblr and Reddit facilitate discussion that is topic-based and controversial, while Instagram is mostly social. A growing number of these sites also allow users to comment anonymously, leading to increases in threats and abuse. The site 4chan, for example, was linked to the 2015 shooting at an Oregon community college. Regardless of where we get our information, the various media avenues available today, versus years ago, make it much easier for everyone to be engaged. The question is: Who controls the media we rely on? Most media are controlled by a limited number of conglomerates. A conglomerate is a corporation made up of a number of companies, organizations, and media networks. In the 1980s, more than fifty companies owned the majority of television and radio stations and networks. Now, only six conglomerates control most of the broadcast media in the United States: CBS Corporation, Comcast, Time Warner, 21st Century Fox (formerly News Corporation), Viacom, and The Walt Disney Company. The Walt Disney Company, for example, owns the ABC Television Network, ESPN, A&E, and Lifetime, in addition to the Disney Channel. Viacom owns BET, Comedy Central, MTV, Nickelodeon, and VH1. Time Warner owns Cartoon Network, CNN, HBO, and TNT, among others. While each of these networks has its own programming, in the end, the conglomerate can make a policy that affects all stations and programming under its control. Conglomerates can create a monopoly on information by controlling a sector of a market. When a media conglomerate has policies or restrictions, they will apply to all stations or outlets under its ownership, potentially limiting the information citizens receive. Conglomerate ownership also creates circumstances in which censorship may occur. iHeartMedia (formerly Clear Channel Media) owns music, radio, and billboards throughout the United States, and in 2010, the company refused to run several billboard ads for the St. Pete Pride Festival and Promenade in St.
Petersburg, Florida. The festival organizers said the content of two ads, a picture of same-sex couples in close contact with one another, was the reason the ads were not run. Because iHeartMedia owns most of the billboards in the area, this limitation was problematic for the festival and decreased awareness of the event. Those in charge of the festival viewed the refusal as censorship. Newspapers, too, have experienced the pattern of concentrated ownership. Gannett Company, while also owning television media, holds a large number of newspapers and news magazines in its control. Many of these were acquired quietly, without public notice or discussion. Gannett's 2013 acquisition of broadcasting giant Belo Corporation caused some concern and news coverage, however. The sale would have allowed Gannett to own both an NBC and a CBS affiliate in St. Louis, Missouri, giving it control over programming and advertising rates for two competing stations. The U.S. Department of Justice required Gannett to sell the station owned by Belo to ensure market competition and multi-ownership in St. Louis. These changes in the format and ownership of media raise the question of whether the media still operate as an independent source of information. Is it possible that corporations and CEOs now control the information flow, making profit more important than the impartial delivery of information? The reality is that media outlets, whether newspaper, television, radio, or Internet, are businesses. They have expenses and must raise revenues. Yet at the same time, we expect the media to entertain, inform, and alert us without bias. They must provide some public services, while following laws and regulations. Reconciling these goals may not always be possible. The media exist to fill a number of functions. Whether the medium is a newspaper, a radio, or a television newscast, a corporation behind the scenes must bring in revenue and pay for the cost of the product.
Revenue comes from advertising and sponsors, like McDonald's, Ford Motor Company, and other large corporations. But corporations will not pay for advertising if there are no viewers or readers. So all programs and publications need to entertain, inform, or interest the public and maintain a steady stream of consumers. In the end, what attracts viewers and advertisers is what survives. The media are also watchdogs of society and of public officials. Some refer to the media as the fourth estate, with the branches of government being the first three estates and the media equally participating as the fourth. This role helps maintain democracy and keeps the government accountable for its actions, even if a branch of the government is reluctant to open itself to public scrutiny. As much as social scientists would like citizens to be informed and involved in politics and events, the reality is that we are not. So the media, especially journalists, keep an eye on what is happening and sound an alarm when the public needs to pay attention. The media also engage in agenda setting, which is the act of choosing which issues or topics deserve public discussion. For example, in the early 1980s, famine in Ethiopia drew worldwide attention, which resulted in increased charitable giving to the country. Yet the famine had been going on for a long time before it was discovered by Western media. Even after the discovery, it took video footage to gain the attention of the British and U.S. populations and start the aid flowing. Today, numerous examples of agenda setting show how important the media are when trying to prevent further emergencies or humanitarian crises. In the spring of 2015, when the Dominican Republic was preparing to exile Haitians and undocumented (or under-documented) residents, major U.S. news outlets remained silent.
However, once the story had been covered several times by Al Jazeera, a state-funded broadcast company based in Qatar, ABC, the New York Times, and other network outlets followed. With major network coverage came public pressure for the U.S. government to act on behalf of the Haitians. Before the Internet, traditional media determined whether citizen photographs or video footage would become "news." In 1991, a private citizen's camcorder footage showed four police officers beating an African American motorist named Rodney King in Los Angeles. After appearing on the local independent television station KTLA-TV and then on the national news, the footage sparked a national discussion on police brutality and ignited riots in Los Angeles. The agenda-setting power of traditional media has begun to be appropriated by social media and smartphones, however. Tumblr, Facebook, YouTube, and other Internet sites allow witnesses to instantly upload images and accounts of events and forward the link to friends. Some uploads go viral and attract the attention of the mainstream media, but large network newscasts and major newspapers are still more powerful at initiating or changing a discussion. The media also promote the public good by offering a platform for public debate and improving citizen awareness. Network news informs the electorate about national issues, elections, and international news. The New York Times, Los Angeles Times, NBC Nightly News, and other outlets make sure voters can easily find out what issues affect the nation. Is terrorism on the rise? Is the dollar weakening? The network news hosts national debates during presidential elections, broadcasts major presidential addresses, and interviews political leaders during times of crisis. Cable news networks now provide coverage of all these topics as well. Local news has a larger job, despite small budgets and fewer resources. Local government and local economic policy have a strong and immediate effect on citizens.
Is the city government planning on changing property tax rates? Will the school district change the way Common Core tests are administered? When and where is the next town hall meeting or public forum to be held? Local and social media provide a forum for protest and discussion of issues that matter to the community. While journalists reporting the news try to present information in an unbiased fashion, sometimes the public seeks opinion and analysis of complicated issues that affect various populations differently, like healthcare reform and the Affordable Care Act. This type of coverage may come in the form of editorials, commentaries, Op-Ed columns, and blogs. These forums allow the editorial staff and informed columnists to express a personal belief and attempt to persuade. If opinion writers are trusted by the public, they have influence. Walter Cronkite, reporting from Vietnam, had a loyal following. In a broadcast following the Tet Offensive in 1968, Cronkite expressed concern that the United States was mired in a conflict that would end in a stalemate. His coverage was based on opinion after viewing the war from the ground. Although the number of people supporting the war had dwindled by this time, Cronkite's commentary bolstered opposition. Like editorials, commentaries contain opinion and are often written by specialists in a field. Larry Sabato, a prominent political science professor at the University of Virginia, occasionally writes his thoughts for the New York Times. These pieces are based on his expertise in politics and elections. Blogs offer more personalized coverage, addressing specific concerns and perspectives for a limited group of readers. Nate Silver's blog, FiveThirtyEight, focuses on elections and politics.
Media Effects and Bias

Concerns about the effects of media on consumers and the existence and extent of media bias go back to the 1920s. Reporter and commentator Walter Lippmann noted that citizens have limited personal experience with government and the world and posited that the media, through their stories, place ideas in citizens' minds. These ideas become part of the citizens' frame of reference and affect their decisions. Lippmann's statements led to the hypodermic theory, which argues that information is "shot" into the receiver's mind and readily accepted. Yet studies in the 1930s and 1940s found that information was transmitted in two steps, with one person reading the news and then sharing the information with friends. People listened to their friends, but not to those with whom they disagreed. The newspaper's effect was thus diminished through conversation. This discovery led to the minimal effects theory, which argues the media have little effect on citizens and voters. By the 1970s, a new idea, the cultivation theory, hypothesized that media develop a person's view of the world by presenting a perceived reality. What we see on a regular basis is our reality. Media can then set norms for readers and viewers by choosing what is covered or discussed. In the end, the consensus among observers is that media have some effect, even if the effect is subtle. This raises the question of how the media, even general newscasts, can affect citizens. One of the ways is through framing: the creation of a narrative, or context, for a news story. The news often uses frames to place a story in a context so the reader understands its importance or relevance. Yet, at the same time, framing affects the way the reader or viewer processes the story.
Episodic framing occurs when a story focuses on isolated details or specifics rather than looking broadly at a whole issue. Thematic framing takes a broad look at an issue and skips numbers or details. It looks at how the issue has changed over a long period of time and what has led to it. For example, a large, urban city is dealing with the problem of an increasing homeless population, and the city has suggested ways to improve the situation. If journalists focus on the immediate statistics, report the current percentage of homeless people, interview a few, and look at the city's current investment in a homeless shelter, the coverage is episodic. If they look at homelessness as a problem increasing everywhere, examine the reasons people become homeless, and discuss the trends in cities' attempts to solve the problem, the coverage is thematic. Episodic frames may create more sympathy, while a thematic frame may leave the reader or viewer emotionally disconnected and less sympathetic. Framing can also affect the way we see race, socioeconomics, or other generalizations. For this reason, it is linked to priming: when media coverage predisposes the viewer or reader to a particular perspective on a subject or issue. If a newspaper article focuses on unemployment, struggling industries, and jobs moving overseas, the reader will have a negative opinion about the economy. If then asked whether he or she approves of the president's job performance, the reader is primed to say no. Readers and viewers are able to fight priming effects if they are aware of them or have prior information about the subject.

Link to Learning: For a closer look at framing and how it influences voters, read "How the Media Frames Political Issues," a review essay by Scott London.

Finally, media information presented as fact can contain covert or overt political material. Covert content is political information provided under the pretense that it is neutral.
A magazine might run a story on climate change by interviewing representatives of only one side of the policy debate and downplaying the opposing view, all without acknowledging the one-sided nature of its coverage. In contrast, when the writer or publication makes clear to the reader or viewer that the information offers only one side of the political debate, the political message is overt content. Political commentators like Rush Limbaugh and publications like Mother Jones openly state their ideological viewpoints. While such overt political content may be offensive or annoying to a reader or viewer, all are offered the choice whether to be exposed to the material.

Coverage Effects on Governance and Campaigns

The media's coverage of campaigns and government, though sometimes spotty, can affect the way government operates and the success of candidates. In 1972, for instance, the McGovern-Fraser reforms created a voter-controlled primary system, so party leaders no longer pick the presidential candidates. Now the media are seen as kingmakers and play a strong role in influencing who will become the Democratic and Republican nominees in presidential elections. They can discuss the candidates' messages, vet their credentials, carry sound bites of their speeches, and conduct interviews. The candidates with the most media coverage build momentum and do well in the first few primaries and caucuses. This, in turn, leads to more media coverage, more momentum, and eventually a winning candidate. Thus, candidates need the media. In the 1980s, campaigns learned that tight control on candidate information created more favorable media coverage. In the presidential election of 1984, candidates Ronald Reagan and George H. W. Bush began using an issue-of-the-day strategy, providing quotes and material on only one topic each day. This strategy limited what journalists could cover because they had only limited quotes and sound bites to use in their reports.
In 1992, both Bush's and Bill Clinton's campaigns maintained their carefully drawn candidate images by also limiting photographers and television journalists to photo opportunities at rallies and campaign venues. The constant control of the media became known as the "bubble," and journalists were less effective when they were in the campaign's bubble. Reporters complained this coverage was campaign advertising rather than journalism, and a new model emerged with the 1996 election. Campaign coverage now focuses on the spectacle of the season, rather than providing information about the candidates. Colorful personalities, strange comments, lapses of memory, and embarrassing revelations are more likely to get air time than the candidates' issue positions. Candidate Donald Trump may be the best example of shallower press coverage of a presidential election. Some argue that newspapers and news programs are limiting the space they allot to discussion of the campaigns. Others argue that citizens want to see updates on the race and electoral drama, not boring issue positions or substantive reporting. It may also be that journalists have tired of the information games played by politicians and have taken back control of the news cycles. All these factors have likely led to the shallow press coverage we see today, sometimes dubbed pack journalism because journalists follow one another rather than digging for their own stories. Television news discusses the strategies and blunders of the election, with colorful examples. Newspapers focus on polls. In an analysis of the 2012 election, Pew Research found that 64 percent of stories and coverage focused on campaign strategy. Only 9 percent covered domestic issue positions, 6 percent covered the candidates' public records, and 1 percent covered their foreign policy positions.
For better or worse, coverage of the candidates' statements gets less air time on radio and television, and sound bites, or clips, of their speeches have become even shorter. In 1968, the average sound bite from Richard Nixon was 42.3 seconds, while a recent study of television coverage found that sound bites had decreased to only eight seconds in the 2004 election. The clips chosen to air were attacks on opponents 40 percent of the time. Only 30 percent contained information about the candidate's issues or events. The study also found the news showed images of the candidates, but for an average of only twenty-five seconds while the newscaster discussed the stories. This study supports the argument that shrinking sound bites are a way for journalists to control the story and add their own analysis rather than just report on it. Candidates are given a few minutes to try to argue their side of an issue, but some say television focuses on the argument rather than on information. In 2004, Jon Stewart of Comedy Central's The Daily Show began attacking the CNN program Crossfire for being theater, saying the hosts engaged in reactionary and partisan arguing rather than true debating. Some of Stewart's criticisms resonated, even with host Paul Begala, and Crossfire was later pulled from the air. The media's discussion of campaigns has also grown negative. Although biased campaign coverage dates back to the period of the partisan press, the increase in the number of cable news stations has made the problem more visible. Stations like FOX News and MSNBC are overt in their use of bias in framing stories. During the 2012 campaign, seventy-one of seventy-four MSNBC stories about Mitt Romney were highly negative, while FOX News' coverage of Obama had forty-six out of fifty-two stories with negative information. The major networks—ABC, CBS, and NBC—were somewhat more balanced, yet the overall coverage of both candidates tended to be negative.
Once candidates are in office, the chore of governing begins, with the added weight of media attention. Historically, if presidents were unhappy with their press coverage, they used personal and professional means to change its tone. Franklin D. Roosevelt, for example, was able to keep journalists from printing stories through gentleman's agreements, loyalty, and the provision of additional information, sometimes off the record. The journalists then wrote positive stories, hoping to keep the president as a source. John F. Kennedy hosted press conferences twice a month and opened the floor for questions from journalists, in an effort to keep press coverage positive. When presidents and other members of the White House are not forthcoming with information, journalists must press for answers. Dan Rather, a journalist for CBS, regularly sparred with presidents in an effort to get information. When Rather interviewed Richard Nixon about Vietnam and Watergate, Nixon was hostile and uncomfortable. In a 1988 interview with then-vice president George H. W. Bush, Bush accused Rather of being argumentative about the possible cover-up of a secret arms sale with Iran:

Rather: I don't want to be argumentative, Mr. Vice President.
Bush: You do, Dan.
Rather: No—no, sir, I don't.
Bush: This is not a great night, because I want to talk about why I want to be president, why those 41 percent of the people are supporting me. And I don't think it's fair to judge my whole career by a rehash of Iran. How would you like it if I judged your career by those seven minutes when you walked off the set in New York?

Cabinet secretaries and other appointees also talk with the press, sometimes making for conflicting messages. The creation of the position of press secretary and the White House Office of Communications both stemmed from the need to send a cohesive message from the executive branch.
Currently, the White House controls the information coming from the executive branch through the Office of Communications and decides who will meet with the press and what information will be given. But stories about the president often examine personality, or the president's ability to lead the country, deal with Congress, or respond to national and international events. They are less likely to cover the president's policies or agendas without a lot of effort on the president's part. When Obama first entered office in 2009, journalists focused on his battles with Congress, critiquing his leadership style and inability to work with Representative Nancy Pelosi, then Speaker of the House. To gain attention for his policies, specifically the American Recovery and Reinvestment Act (ARRA), Obama began traveling the United States to draw the media away from Congress and encourage discussion of his economic stimulus package. Once the ARRA had been passed, Obama began traveling again, speaking locally about why the country needed the Affordable Care Act and guiding media coverage to promote support for the act. Congressional representatives have a harder time attracting media attention for their policies. House and Senate members who use the media well, either to help their party or to show expertise in an area, may increase their power within Congress, which helps them bargain for fellow legislators' votes. Senators and high-ranking House members may also be invited to appear on cable news programs as guests, where they may gain some media support for their policies. Yet, overall, because there are so many members of Congress, and therefore so many agendas, it is harder for individual representatives to draw media coverage. It is less clear, however, whether media coverage of an issue leads Congress to make policy, or whether congressional policymaking leads the media to cover policy.
In the 1970s, Congress investigated ways to stem the number of drug-induced deaths and crimes. As congressional meetings dramatically increased, the press was slow to cover the topic. The number of hearings was at its highest from 1970 to 1982, yet media coverage did not rise to the same level until 1984. Subsequent hearings and coverage led to national policies like DARE and First Lady Nancy Reagan's "Just Say No" campaign. Later studies of the media's effect on both the president and Congress report that the media have a stronger agenda-setting effect on the president than on Congress. What the media choose to cover affects what the president thinks is important to voters, and these issues were often of national importance. The media's effect on Congress was limited, however, and mostly extended to local issues like education or child and elder abuse. If the media are discussing a topic, chances are a member of Congress has already submitted a relevant bill, and it is waiting in committee.

Coverage Effects on Society

The media choose what they want to discuss. This agenda setting creates a reality for voters and politicians that affects the way people think, act, and vote. Even if the crime rate is going down, for instance, citizens accustomed to reading stories about assault and other offenses still perceive crime to be an issue. Studies have also found that the media's portrayal of race is flawed, especially in coverage of crime and poverty. One study revealed that local news shows were more likely to show pictures of criminals when they were African American, so they overrepresented blacks as perpetrators and whites as victims. A second study found a similar pattern in which Latinos were underrepresented as victims of crime and as police officers, while whites were overrepresented as both. Voters were thus more likely to assume that most criminals are black and most victims and police officers are white, even though the numbers do not support those assumptions.
Network news similarly misrepresents the victims of poverty by using more images of blacks than whites in its segments. Viewers in a study were left believing African Americans were the majority of the unemployed and poor, rather than seeing the problem as one faced by many races. The misrepresentation of race is not limited to news coverage, however. A study of images printed in national magazines, like Time and Newsweek, found they also misrepresented race and poverty. The magazines were more likely to show images of young African Americans when discussing poverty and excluded the elderly and the young, as well as whites and Latinos, groups that together make up the true picture of poverty. Racial framing, even if unintentional, affects perceptions and policies. If viewers are continually presented with images of African Americans as criminals, there is an increased chance they will perceive members of this group as violent or aggressive. The perception that most recipients of welfare are working-age African Americans may have led some citizens to vote for candidates who promised to reduce welfare benefits. When survey respondents were shown a story of a white unemployed individual, 71 percent listed unemployment as one of the top three problems facing the United States, while only 53 percent did so if the story was about an unemployed African American. Word choice may also have a priming effect. News organizations like the Los Angeles Times and the Associated Press no longer use the phrase "illegal immigrant" to describe undocumented residents. This may be due to the desire to create a "sympathetic" frame for the immigration situation rather than a "threat" frame. Media coverage of women has been similarly biased. Most journalists in the early 1900s were male, and women's issues were not part of the newsroom discussion.
As journalist Kay Mills put it, the women's movement of the 1960s and 1970s was about raising awareness of the problems of equality, but writing about rallies "was like trying to nail Jell-O to the wall." Most politicians, business leaders, and other authority figures were male, and editors' reactions to the stories were lukewarm. The lack of women in the newsroom, politics, and corporate leadership encouraged silence. In 1976, journalist Barbara Walters became the first female co-anchor on a network news show, The ABC Evening News. She was met with great hostility from her co-anchor Harry Reasoner and received critical coverage from the press. On newspaper staffs, women reported having to fight for assignments to high-profile beats (assigned coverage areas or topics, such as the economy or politics) that were normally reserved for male journalists. Once female journalists held these assignments, they feared writing about women's issues. Would it make them appear weak? Would they be taken from their coveted beats? This apprehension allowed poor coverage of women and the women's movement to continue until women were better represented as journalists and as editors. Strength of numbers allowed them to be confident when covering issues like health care, childcare, and education. The media's historically uneven coverage of women continues in its treatment of female candidates. Early coverage was sparse. The stories that did appear often discussed the candidate's viability, or ability to win, rather than her stand on the issues. Women were seen as a novelty rather than as serious contenders who needed to be vetted and discussed. Modern media coverage has changed slightly. One study found that female candidates receive more favorable coverage than in prior generations, especially if they are incumbents. Yet a different study found that while there was increased coverage for female candidates, it was often negative. And it did not include Latina candidates.
Without coverage, they are less likely to win. The historically negative media coverage of female candidates has had another concrete effect: women are less likely than men to run for office. One common reason is the effect negative media coverage has on families. Many women do not wish to expose their children or spouses to criticism. In 2008, the nomination of Sarah Palin as Republican candidate John McCain’s running mate validated this concern. Some articles focused on her qualifications to be a potential future president or her record on the issues. But others questioned whether she had the right to run for office, given that she had young children, one of whom has developmental disabilities. Her daughter, Bristol, was criticized for becoming pregnant while unmarried. Her husband was called cheap for failing to buy her a high-priced wedding ring. Even when candidates ask that children and families be off-limits, the press rarely honors the request. So women with young children may wait until their children are grown before running for office, if they choose to run at all.

Link to Learning
The Center for American Women in Politics researches the treatment women receive from both government and the media and shares the data with the public.

Licensing and Attribution
CC LICENSED CONTENT, SHARED PREVIOUSLY
Revision and Adaptation. Authored by: Daniel M. Regalado. License: CC BY: Attribution
American Government. Authored by: OpenStax. Provided by: OpenStax; Rice University. Located at: http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/5bcc0e59-7345-421d-8507-a1e4608685e8@18.11.

Media Functions
Media Regulation: Crash Course Government and Politics #45 (video)

The media exist to fill a number of functions. Whether the medium is a newspaper, a radio broadcast, or a television newscast, a corporation behind the scenes must bring in revenue and pay for the cost of the product. Revenue comes from advertising and sponsors, like McDonald’s, Ford Motor Company, and other large corporations. 
But corporations will not pay for advertising if there are no viewers or readers. So all programs and publications need to entertain, inform, or interest the public and maintain a steady stream of consumers. In the end, what attracts viewers and advertisers is what survives. The media are also watchdogs of society and of public officials. Some refer to the media as the fourth estate, with the branches of government being the first three estates and the media equally participating as the fourth. This role helps maintain democracy and keeps the government accountable for its actions, even if a branch of the government is reluctant to open itself to public scrutiny. As much as social scientists would like citizens to be informed and involved in politics and events, the reality is that we are not. So the media, especially journalists, keep an eye on what is happening and sound the alarm when the public needs to pay attention. The media also engage in agenda setting, which is the act of choosing which issues or topics deserve public discussion. For example, in the early 1980s, famine in Ethiopia drew worldwide attention, which resulted in increased charitable giving to the country. Yet the famine had been going on for a long time before it was discovered by western media. Even after the discovery, it took video footage to gain the attention of the British and U.S. populations and start the aid flowing. Today, numerous examples of agenda setting show how important the media are when trying to prevent further emergencies or humanitarian crises. In the spring of 2015, when the Dominican Republic was preparing to exile Haitians and undocumented (or under-documented) residents, major U.S. news outlets remained silent. However, once the story had been covered several times by Al Jazeera, a state-funded broadcaster based in Qatar, outlets such as ABC and the New York Times followed. With major network coverage came public pressure for the U.S. 
government to act on behalf of the Haitians. Before the Internet, traditional media determined whether citizen photographs or video footage would become “news.” In 1991, a private citizen’s camcorder footage showed four police officers beating an African American motorist named Rodney King in Los Angeles. After appearing on a local independent television station, KTLA-TV, and then on the national news, the event began a national discussion on police brutality and ignited riots in Los Angeles. The agenda-setting power of traditional media has begun to be appropriated by social media and smartphones, however. Tumblr, Facebook, YouTube, and other Internet sites allow witnesses to instantly upload images and accounts of events and forward the link to friends. Some uploads go viral and attract the attention of the mainstream media, but large network newscasts and major newspapers are still more powerful at initiating or changing a discussion. The media also promote the public good by offering a platform for public debate and improving citizen awareness. Network news informs the electorate about national issues, elections, and international news. The New York Times, Los Angeles Times, NBC Nightly News, and other outlets make sure voters can easily find out what issues affect the nation. Is terrorism on the rise? Is the dollar weakening? The network news hosts national debates during presidential elections, broadcasts major presidential addresses, and interviews political leaders during times of crisis. Cable news networks now provide coverage of all these topics as well. Local news has a larger job, despite small budgets and fewer resources. Local government and local economic policy have a strong and immediate effect on citizens. Is the city government planning on changing property tax rates? Will the school district change the way Common Core tests are administered? When and where is the next town hall meeting or public forum to be held? 
Local and social media provide a forum for protest and discussion of issues that matter to the community.  Figure 14.11 Meetings of local governance, such as this city council meeting in Fullerton, California, are rarely attended by more than gadflies and journalists. Image credit: Calwatch. CC BY-SA 3.0  While journalists reporting the news try to present information in an unbiased fashion, sometimes the public seeks opinion and analysis of complicated issues that affect various populations differently, like healthcare reform and the Affordable Care Act. This type of coverage may come in the form of editorials, commentaries, Op-Ed columns, and blogs. These forums allow the editorial staff and informed columnists to express a personal belief and attempt to persuade. If opinion writers are trusted by the public, they have influence. Walter Cronkite, reporting from Vietnam, had a loyal following. In a broadcast following the Tet Offensive in 1968, Cronkite expressed concern that the United States was mired in a conflict that would end in a stalemate. His coverage was based on opinion after viewing the war from the ground. Although the number of people supporting the war had dwindled by this time, Cronkite’s commentary bolstered opposition. Like editorials, commentaries contain opinion and are often written by specialists in a field. Larry Sabato, a prominent political science professor at the University of Virginia, occasionally writes his thoughts for the New York Times. These pieces are based on his expertise in politics and elections. Blogs offer more personalized coverage, addressing specific concerns and perspectives for a limited group of readers. Nate Silver’s blog, FiveThirtyEight, focuses on elections and politics.
Glossary
Overview
Glossary: Public Opinion and the Media in Texas

agenda setting: the media’s ability to choose which issues or topics get attention
agent of political socialization: a person or entity that teaches and influences others about politics through use of information
attitudes: represent the preferences we form based on our life experiences and values; affected by our personal beliefs
bandwagon effect: occurs when the media pays more attention to candidates who poll well during the fall and the first few primaries
beliefs: closely held ideas that support our values and expectations about life and politics
Bradley effect: theory concerning observed discrepancies between voter opinion polls and election outcomes in government elections where a white candidate and non-white candidate run against one another; the theory proposes that some voters who intend to vote for the white candidate would nonetheless tell pollsters that they are undecided or likely to vote for the non-white candidate
covert content: ideologically slanted information presented as unbiased information in order to influence public opinion
cultivation theory: hypothesizes that media develops a person’s view of the world by presenting a perceived reality
episodic framing: occurs when a story focuses on isolated details or specifics rather than looking broadly at a whole issue
favorability polls: a public opinion poll that measures the public’s positive feelings about a candidate or politician
mass media: the collection of all media forms that communicate information to the general public
media: the number of different communication formats, from television media to print media
overt content: political information whose author makes clear that only one side is presented
pack journalism: journalists follow one another rather than digging for their own stories, often leading to shallow press coverage
political socialization: the process by which we are trained to understand and join a country’s political world
public opinion: a collection of popular views about something, for example, a person, a local or national event, or a new idea
public relations: biased communication intended to improve the image of people, companies, or organizations
racial framing: a type of media framing in which socially constructed frames about specific racial groups are repackaged and circulated through newspapers, magazines, billboards, music, social media, television, film, and radio; these frames influence media audiences to recall, evaluate, and interpret an issue in particular ways
social media: a set of applications or web platforms that allow users to immediately communicate with one another
thematic framing: takes a broad look at an issue and skips numbers or details; it looks at how the issue has changed over a long period of time and what has led to it

Licenses and Attributions
CC LICENSED CONTENT, SHARED PREVIOUSLY
American Government. What is the Media? Glossary. Authored by: OpenStax. Provided by: OpenStax; Rice University. Located At: https://cnx.org/contents/W8wOWXNF@12.1:Y1CfqFju@5/Preface. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/9e28f580-0d1b-4d72-8795-c48329947ac2@1.
CC LICENSED CONTENT, ORIGINAL
Public Opinion and the Media in Texas: Glossary. Authored by: panOpen. License: CC BY: Attribution
Assessment
Overview
This is a quiz for Chapter 14.

Texas Government Chapter Fourteen Quiz
Check your knowledge of Chapter Fourteen by taking the quiz linked below. The quiz will open in a new browser window or tab.
Field Crops
Overview

Contents
1.2 Agriculture
1.3 Crops
1.4 Field Crops
1.5 High Production Crops
1.6 Field Crop Production Practices: Row Crops
1.7 Cover Crops
1.8 Cover Crop Benefits
1.9 Cover Crop Risks
1.11 Intercropping
1.12 Field Crop Uses

Title Image: “Line of crops near Littleport; Cambridgeshire” by Keith Evans is licensed under CC BY-SA 2.0

Introduction
Lesson Objectives
Identify examples of field crops. Select examples of common field crops from the list provided.
Identify common uses of field crops. Select common uses of field crops from the list provided.
Explain common production practices for field crops. Describe common field crop production practices.
Evaluate the economic impact of field crops.

Key Terms
animal feed - the hull and bran from grain that is fed to livestock
cereals - grass grown for the edible component of its grain
corn syrup - refined starch slurry
crop - plants that are cultivated either for sale or for subsistence
crop rotation - growing different crops in the same area in different seasons
field crops - plants grown commercially in large areas
grain - small, dry seed that is harvested for human or animal consumption
intercropping - growing different crops together for mutual benefits
oats - grain grown in cool climates, widely used for human and animal consumption
oil - fat extracted from some plant seeds, fruits, and mesocarps
oil crops - plants grown for the oil they produce
row crop - crops planted in rows wide enough for machinery access
silage - anaerobically (without oxygen) fermented corn stalks and other green plants
sweeteners - sugar that provides humans and animals energy in the form of carbohydrates

Introduction
Plants are grown and harvested at every scale, from personal use to large-scale operations that cover millions of acres. Understanding plants, their uses, and how they are grown is an important aspect of agriculture. 
Supporting American diets and some of those abroad, United States farmers are tasked with producing a large number of crops. Crops extend beyond the monumental task of feeding humans and animals; their fibers and oils are also used in paper, clothing, rope, biofuels, and other valuable materials.

Agriculture
Tremendous natural variations exist among the individuals of any plant species. The traits that define color, shape, flavor, height, yield, and resistance to pests, pathogens, and environmental stresses are not fixed within a species. Individual plants and animals from the same species can be easily distinguished based on these characteristics. Since the beginning of agriculture, humans have unconsciously been selecting plants and animals with desirable traits, such as large-sized grains, pods, fruits, and vegetables; sweeter and less-seeded fruits; less bitter and nonprickly vegetables; cereals with large panicles and tough rachis; and non-seed-shattering plants. As a consequence of such artificial selections over many generations, unprecedented changes occurred in cultivated plants that set them apart from their ancestors and wild relatives. For example, the relentless efforts of humans led to the development of various crops, such as corn from a wild grass, teosinte; long-spiked, six-row barley from short-spiked, two-row wild barley; large tomatoes from a small berry; and a variety of less-seeded fruits and palatable vegetables from their bitter wild ancestors (see Figure 5.1.1). These plants—enriched in traits that favor higher yields, productive harvest, and increased palatability—would not have come into being without the persistence of humans since the dawn of agriculture. For several millennia, humans have put tremendous effort into providing protection and ensuring the continuous propagation of cultivated plants. We provide fertilizers, pesticides, and water, as well as services such as weeding, to promote the growth of crop plants. 
Thus, domesticated plants need humans for their survival as much as human survival depends on them. These species cannot survive in nature for a long time by themselves, but they have spread globally with human help. For some species, this dependency on humans has become total. For example, maize absolutely depends on humans for its survival. If you leave a mature cob in the field, some of its seeds may germinate on the cob, but they will soon die due to the lack of space for emerging seedlings to grow. Furthermore, maize seeds do not fall spontaneously and need human help to be detached from the cob and planted in the soil. This mutual interdependence between crop plants and humans was achieved over several millennia, and this is the historical process known as domestication. Crops There are many definitions of the word “crop”. When referring to plants, the United States Department of Agriculture considers crops to be those plants that are cultivated either for sale or for subsistence. There are many plants that are specialty crops when cultivated, but are also collected from wild populations. Wild plants are not considered specialty crops even though they may be used for the same purpose as cultivated plants. This is somewhat common among medicinal herbs and woodland plants. There are a number of native ferns that are collected from wild populations for use in the floral trade. There are also a number of marine plants that are collected from wild populations both for direct consumption and for industrial uses. Although these are specialty uses, wild plants are not considered specialty crops by USDA. However, natural populations of native plants that are brought into cultivation, such as sugar maple trees, pecans, blueberry, huckleberry and cranberry are considered specialty crops by USDA. In order for a plant to be considered cultivated, some form of management must be applied. 
The intensity of the management is not critical to determining whether a plant is cultivated or not. This definition includes plants or plant products harvested from “wild areas” whose populations are managed, monitored, and documented to ensure long-term, sustainable production. If a naturally occurring population of plants is brought under management and that plant satisfies the definition of specialty crop, then those plants would be considered specialty crops; however, it is common for such plants to be designated “wild-harvested” for marketing purposes. For the purpose of some programs in which state agencies are the eligible entities, states may choose to define plants collected from the wild as specialty crops. The classifications of cultivated and wild-harvested may both apply to one kind of plant, but the final designation will be determined by how it is grown and for what purpose. For instance, amaranth may be grown as a leafy green, or it may be grown as a grain. Leafy greens are vegetables; therefore, amaranth grown in such a manner would be considered a specialty crop. However, grains are not specialty crops; therefore, amaranth grown for grain would not be considered a specialty crop.

Field Crops
A large majority of agricultural acreage and crop revenue is dedicated to growing plants commercially in large areas: creating field crops. Field crops include but are not limited to corn, cotton, oats, rice, sorghum, soybeans, winter wheat, durum wheat, and spring wheat (Figure 5.1.2). In 2021, corn, soybeans, and all kinds of both wheat and cotton covered 238.7 million acres in the United States, which is about 12.5% of the country’s total area (USDA Acreage Report, 2021). Many field crops are harvested for grain (Figure 5.1.3). Grain is a small, dry seed that is used for human or animal consumption. Grass grown for the edible component of its grain is known as cereal. 
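The acreage share quoted above can be sanity-checked with a line of arithmetic. This is a rough sketch, not from the source: the 1.9-billion-acre base is an assumption that "the country's total area" here refers to the land area of the contiguous United States.

```python
# Sanity check of the USDA 2021 acreage share quoted in the text.
# Assumption (not from the source): the base is the land area of the
# contiguous United States, roughly 1.9 billion acres.
field_crop_acres = 238.7e6   # corn, soybeans, wheat, and cotton, 2021
us_land_acres = 1.9e9        # assumed contiguous-U.S. land area

share = field_crop_acres / us_land_acres
print(f"{share:.1%}")  # prints 12.6%, consistent with the ~12.5% in the text
```

Using the land area of all fifty states (about 2.3 billion acres) would give a share closer to 10%, so the quoted percentage depends on which base area is assumed.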
Wheat is the most important kind of grain grown in temperate countries, as it is used to make flour for staple foods, such as bread and pasta. There are differences between whole grains and refined grains. A whole grain consists of the entire grain seed of a plant. This seed, also known as the kernel, is made up of three key parts: the bran, the germ, and the endosperm, as shown in Figure 5.1.4. Whole grains can be eaten whole, cracked, split, flaked, or ground. Most often they are milled into flour and used to make breads, cereals, pasta, crackers, and other grain-based foods. Refining typically removes the bran and the germ, leaving only the endosperm. Without the bran and germ, about 25% of the grain’s protein is lost, along with at least seventeen key nutrients. Refined products still contribute valuable nutrients because processors add back some vitamins and minerals to enrich refined grains. But whole grains still provide more protein, more fiber, and many other important vitamins and minerals.

High Production Crops
About a third of America’s corn crop is used for feeding cattle, hogs, and poultry in the U.S. Corn provides the “carbs” in animal feed, while soybeans provide the protein. It takes a couple of bushels of American corn to make corn-fed steak; by some estimates, a beef cow can eat a ton of corn if raised in a feedlot. Both dairy cows and beef cows also consume silage, which is fermented corn stalks and other green plants. Corn has hundreds of uses. It is used to make breakfast cereal, tortilla chips, grits, canned beer, soda, cooking oil, and bio-degradable packing materials. It’s the key ingredient in the growing medium for life-saving medicines including penicillin. Corn gluten meal is used on flower beds to prevent weeds. Just over a third of the corn crop is used to make ethanol, which serves as a renewable fuel additive to gasoline. 
The Renewable Fuel Standard requires that 10% of gasoline be renewable fuel, but you can find E15 (15 percent ethanol) or E85 (85 percent ethanol) in some areas, particularly in the Midwest. The rest of the corn crop is used for human food, beverages, and industrial uses in the U.S., or exported to other countries for food or feed use.

Field Crop Production Practices: Row Crops
A row crop is a crop that can be planted in rows wide enough to allow it to be tilled or otherwise cultivated by agricultural machinery—machinery tailored for the seasonal activities of row crops. Such crops are sown by drilling rather than broadcasting; this distinction is significant in crop rotation strategies, where land is planted with row crops, commodity food grains, and sod-forming crops in a sequence meant to protect the quality of the soil while maximizing the soil’s annual productivity. Strategic agricultural planning takes many factors such as water availability and soil quality into consideration. As much as 20% of crops worldwide are irrigated, with some crops such as rice and maize benefiting from the extra water. During the growing season, the inter-row spaces are hoed two to four times and the rows are weeded to conserve moisture and improve aeration. As a result, the soil’s microbiological activity increases and mobilization of nutrients is intensified. Row crops are valuable precursors of spring grain crops, flax, and hemp. The beneficial effect of row crops extends to the second crop. Examples of row crops include sunflower, potato, canola, dry bean, field pea, flax, safflower, buckwheat, cotton, maize, soybeans, and sugar beets.

Cover Crops
The harvest of low-residue row crops, such as corn silage or soybeans, usually means the soil surface of a field will be left bare until the next crop is planted, when a new plant canopy is established. In the Northeast, the next planting may be 5-7 months away. 
That's a long time for the bare soil to be subjected to erosion caused by rainfall, snowmelt, or wind. For that reason, cover crops are usually established in the fall months and remain during the winter. Properly planned and executed, cover crops will protect farmland during this vulnerable period. In the spring they are then killed and left on the surface as residue for conservation tillage or are incorporated into the soil. There are, of course, risks and benefits associated with cover crops. Cover crop species and management should be planned objectively with regard to soil erosion, water quality, nutrient management, forage, and/or soil quality.

Cover Crop Benefits

The protective canopy formed by a cover crop reduces the impact of raindrops on the soil surface, thereby decreasing the breakdown of soil aggregates. This greatly reduces soil erosion and runoff and increases infiltration. Decreased soil loss and runoff translate to reduced transport of valuable nutrients, pesticides, herbicides, and harmful manure-associated pathogens from farmland; these pollutants degrade the quality of bodies of water and pose a threat to human health. A cover crop slows the velocity of runoff from rainfall and snowmelt, reducing soil loss due to sheet and rill erosion. Over time, a cover crop regimen will increase soil organic matter, leading to improvements in soil structure and stability and increased moisture- and nutrient-holding capacity for plant growth. These properties will reduce runoff through improved infiltration (movement of water through the soil surface) and percolation (movement of water through the soil profile). A cover crop will increase soil quality by improving the biological, chemical, and physical soil properties. As a "trap crop," a cover crop will store nutrients from manure, mineralized organic nitrogen, or underutilized fertilizer until the following year's crop can utilize them, reducing nutrient runoff and leaching.
When a cover crop is managed for its contribution to soil nitrogen, the application of nitrogen fertilizer for the subsequent crop may be reduced, thereby lowering costs of production, reducing nitrogen losses to the environment, and decreasing the need for purchased nitrogen fertilizer that is produced using fossil fuels. Cover crops will reduce or mitigate soil compaction. Deep taproots of some cover crops grown in the fall and spring, when compacted layers are relatively soft, can penetrate these layers. Cover crops result in better tillage and traffic conditions by reducing soil moisture deeper in the soil profile through evapotranspiration. Improved soil structure and stability can improve the soil's capacity to withstand heavy farm equipment, resulting in less subsurface compaction. A cover crop provides a natural means of suppressing soil diseases and pests. It can also serve as a mulch or cover to assist in suppressing weed growth. A cover crop can provide high-quality material for grazing livestock and can provide food and habitat for wildlife, beneficial insects, and pollinators.

Cover Crop Risks

While proper planning and management of a cover crop can help minimize or eliminate risks, planting a cover crop does involve some risks and potential drawbacks. Fields with heavy plant residues or early-season cover crop weeds or growth are more susceptible to increases in populations of soil insects such as cutworms, armyworms, and slugs; however, proper pest scouting and treatment, if needed, can reduce the risk of damage by pests. Growing the wrong cover crop with inadequate rotations may create problems with disease, because the cover crop may increase the occurrence of a disease in the subsequent crop if it happens to be a host for the organism that causes the disease. For example, a brassica cover crop such as forage turnip may harbor insects and diseases that attack a brassica cash crop like broccoli.
Therefore, the choice and management of cover crops should be made with existing weed, disease, nematode, and other soil problems in mind. Some cover crops need to be terminated early to prevent management problems with soil fertility; for example, over-mature cereal rye with a high carbon-to-nitrogen (C:N) ratio will tie up nitrogen needed for early corn growth. The cost of establishing and maintaining a cover crop may outweigh some of the benefits. The added cost of seed, planting, management, disking and incorporating the cover crop, and the possibility of planting delays may make cover crops unfeasible for some farmers.

Crop Rotation

Most corn and soybeans are grown in rotation with other row crops, while most cotton is grown successively in the same fields. The most common wheat rotation includes fallow or idle land. Soil-conserving crops in rotation with corn are more commonly used on highly erodible land (HEL) than on non-HEL.

Intercropping

Incorporating intercropping (growing different crops together) principles into an agricultural operation increases diversity and interaction between plants, arthropods, mammals, birds, and microorganisms, resulting in a more stable crop ecosystem and a more efficient use of space, water, sunlight, and nutrients. Furthermore, soil health benefits from increased ground coverage with living vegetation, which reduces erosion, as well as from an increased quantity and diversity of root exudates, which enhance soil fauna. This collaborative type of crop management mimics nature and brings fewer pest outbreaks, improved nutrient cycling and crop nutrient uptake, and increased water infiltration and moisture retention. Soil quality, water quality, and wildlife habitat all benefit. Relay, row, and strip are three types of intercropping strategies.
- Relay intercropping: growing two or more crops on the same field, with the second crop planted after the first (e.g., overseeding a clover cover crop into cotton during defoliation, or planting clover at lay-by time in corn).
- Row intercropping: growing two or more crops simultaneously in the same field, with at least one crop planted in rows (e.g., planting corn in the rows and interseeding sorghum between the rows, harvesting all as silage; or planting vegetables, cereal grains, perennial covers, or annual covers between orchard tree rows).
- Strip intercropping: growing crops in alternate strips wide enough to permit separate crop production machinery, but close enough for the crops to interact (e.g., planting alternating six-row strips of corn and soybeans, or alternating strips of corn and Sudan grass). Generally, the maximum width of individual strips for effective interaction of crop pests and their natural enemies is about 30 ft.

Field Crop Uses

The corn refining industry produces hundreds of products and byproducts, such as high fructose corn syrup (HFCS), corn syrup, starches, animal feed, oil, and alcohol. Modified starches are manufactured for various food and trade industries for which unmodified starches are not suitable. For example, large quantities of modified starches go into the manufacture of paper products as binding for the fiber. Modifying is accomplished in tanks that treat the starch slurry with selected chemicals, such as hydrochloric acid, to produce acid-modified starch; sodium hypochlorite, to produce oxidized starch; and ethylene oxide, to produce hydroxyethyl starches. The treated starch is then washed, dried, and packaged for distribution. Across the corn wet milling industry, about 80 percent of starch slurry goes to corn syrup, sugar, and alcohol production. The relative amounts of starch slurry used for corn syrup, sugar, and alcohol production vary widely among plants.
Syrups and sugars are sweeteners formed by hydrolyzing the starch, with partial hydrolysis resulting in corn syrup and complete hydrolysis producing corn sugar. The hydrolysis step can be accomplished using mineral acids, enzymes, or a combination of both. The hydrolyzed product is then refined, that is, decolorized with activated carbon and freed of inorganic salt impurities with ion exchange resins. The refined syrup is concentrated to the desired level in evaporators and is cooled for storage and shipping. Dextrose production is quite similar to corn syrup production, the major difference being that the hydrolysis process is allowed to go to completion. The hydrolyzed liquor is refined with activated carbon and ion exchange resins to remove color and inorganic salts, and the product stream is concentrated by evaporation to the 70 to 75 percent solids range. After cooling, the liquor is transferred to crystallizing vessels, where it is seeded with sugar crystals from previous batches. The solution is held for several days while the contents are further cooled and the dextrose crystallizes. After about 60 percent of the dextrose solids crystallize, they are removed from the liquid by centrifuges, dried, and packed for shipment. A smaller portion of the syrup refinery is devoted to the production of corn syrup solids. In this operation, refined corn syrup is further concentrated by evaporation to a high dry-substance level. The syrup is then solidified by rapid cooling and subsequently milled to form an amorphous crystalline product. Ethanol is produced by the addition of enzymes to the pure starch slurry to hydrolyze the starch to fermentable sugars. Following hydrolysis, yeast is added to initiate the fermentation process. After about 2 days, approximately 90 percent of the starch is converted to ethanol. The fermentation broth is transferred to a still, where the ethanol (about 50 vol%) is distilled.
Subsequent distillation and treatment steps produce 95 percent, absolute, or denatured ethanol.

The object of silage making is to preserve the harvested crop by anaerobic (without oxygen) fermentation. This process uses bacteria to convert soluble carbohydrates into acetic and lactic acid, which "pickles" the crop. In a well-sealed silo, it can be stored for long periods of time without losing quality. To produce high-quality corn silage, it is important to do a good job in growing, harvesting, and preserving the crop. Corn silage is a high-quality forage crop that is used on many dairy farms and on some beef cattle farms in Tennessee. Its popularity is due to the high yield of a very digestible, high-energy crop and the ease of adapting it to mechanized harvesting and feeding. Corn for silage fits ideally into no-till and double-cropping programs.

Oil-bearing crops or oil crops include both annual (usually called oilseeds) and perennial plants whose seeds, fruits or mesocarp, and nuts are valued mainly for the edible or industrial oils that are extracted from them. Some of the crops included in this chapter are also fiber crops in that both the seeds and the fibers are harvested from the same plant. Such crops include: coconuts, yielding coir from the mesocarp; kapok fruit; seed cotton; linseed; and hempseed. In the case of several other crops, both the pulp of the fruit and the kernels are used for oil. The main crops of this type are oil-palm fruit and tallow tree seeds. Only 5-6 percent of the world production of oil crops is used for seed (oilseeds) and animal feed, while about 8 percent is used for food. Edible processed products from oil crops, other than oil, include flour, flakes or grits, groundnut preparations (butter, salted nuts, candy), preserved olives, desiccated coconut, and fermented and non-fermented soya products. The remaining 86 percent is processed into oil. The fat content of oil crops varies widely.
Fat content ranges from as low as 10-15 percent of the weight of coconuts to over 50 percent of the weight of sesame seeds and palm kernels. Carbohydrates, mainly polysaccharides, range from 15 to 30 percent in the oilseeds but are generally lower in other oil-bearing crops. The protein content is very high in soybeans, at up to 40 percent, but is much lower in many other oilseeds, at 15-25 percent, and is lower still in some other oil-bearing crops. The major U.S. oilseed crops are soybeans, cottonseed, sunflower seed, canola, rapeseed, and peanuts. Soybeans are the dominant oilseed in the United States, accounting for about 90 percent of U.S. oilseed production.

Dig Deeper

Attributions

Title Image "Line of crops near Littleport; Cambridgeshire" by Keith Evans is licensed under CC BY-SA 2.0.
"4.2 Crop Rotations" by the United States Department of Agriculture is in the Public Domain.
"Acreage Report" by the United States Department of Agriculture is in the Public Domain.
"Corn and Other Feedgrains" by the United States Department of Agriculture is in the Public Domain.
"Corn is America's Largest Crop in 2019" by the United States Department of Agriculture is in the Public Domain.
"Cover Crops - Keeping Soil in Place While Providing Other Benefits" by the United States Department of Agriculture Natural Resources Conservation Service is in the Public Domain.
"Definition of Specialty Crop" by the United States Department of Agriculture is in the Public Domain.
"Field Crops" by the United States Department of Agriculture is in the Public Domain.
"Intercropping to Improve Soil Quality and Increase Biodiversity" by the United States Department of Agriculture is in the Public Domain.
"Oil Crops at a Glance" by Mark Ash and Todd Hubbs, United States Department of Agriculture is in the Public Domain.
"Row Crop" by Wikipedia is licensed CC BY-SA.
"SP434D Corn Silage," The University of Tennessee Agricultural Extension Service, SP434D-5M-9/98 E12-2015-00-082-99, used with permission. "Whole Grains" is licensed under CC BY 4.0.
2.2 Forage Crops
2.3 Common Uses of Forage Crops
2.4 Establishment of Cover Crops
2.5 Annual Cover Crop Grazing Options
2.6 Alfalfa
2.7 No-Till Practices
2.8 Crop Rotations
2.9 Storing Forage Crops

Forage Crops

Overview

Title Image: Cows in a pasture. Credit: Kevin Sedivec, North Dakota State University Extension; licensed CC BY-NC-SA.

Introduction

Lesson Objectives

- Identify examples of forage crops.
- Select examples of common forage crops from the list provided.
- Identify common uses of forage crops.
- Select common uses of forage crops from the list provided.
- Explain common production practices for forage crops.
- Describe common forage crop production practices.
- Evaluate the economic impact of forage crops.

Key Terms

alfalfa - a pasture crop used for grazing or hay production
forage crops - plants grown specifically to be grazed by livestock or conserved as hay
pulses - annual leguminous crops yielding from one to 12 grains or seeds of variable size, shape, and color within a pod

Introduction

The value of cover crops to the environment and our knowledge of their usefulness continue to grow. The use of cover crops in a cropping rotation or as an integrated livestock-cropping system has become a popular option for farmers. Although cover crops have been used for centuries, farmers and ranchers today have become more aware of management strategies to reduce soil erosion, improve soil biodiversity, increase soil nutrient retention, and promote soil water-holding capacity.

Forage Crops

Cover crops may provide opportunities to use cropped land for grazing livestock or to produce a harvested feed source, also known as foraging. Livestock grazing of cover crops can further recycle nutrients back into the soil. When harvested at the correct time, cover crops conserved as hay, haylage, or silage can provide a nutrient-rich winter feed.
In addition to using rangelands and native or naturalized pastures for grazing, farmers seed pastures with improved grasses and legumes and cultivate forage crops for such things as hay, silage, and fresh feed. During the last 4-5 decades, plant breeders have made important contributions to livestock productivity by developing high-yielding forage varieties with tolerances to biotic and abiotic stresses.

Common Uses of Forage Crops

Forage grasslands are used to feed livestock; globally, it has been estimated that they represent 26% of land area and 70% of agricultural area (FAO, 2010). Such crops are economically significant. Forage crops are usually grasses (Poaceae) or herbaceous legumes (Fabaceae). Some tree legumes such as mulga (Acacia aneura) and leadtree (Leucaena leucocephala) are also grown in desert and tropical grasslands. In the tropics, popular grasses include Napier grass (Pennisetum purpureum), Brachiaria, and Panicum species. In temperate climates, the main grasses include bentgrass (Agrostis spp.), fescue (Festuca spp.), ryegrass (Lolium spp.), and orchard grass (Dactylis spp.), or hybrids of these. For example, Festuca and Lolium hybrids have been developed since the 1970s, giving rise to crops such as Festulolium pabulare, which combines the superior forage quality of Lolium multiflorum with the persistence and stress tolerance of Festuca arundinacea. Some maize (Zea mays) cultivars have been specifically bred for forage. The commonly cultivated herbaceous legumes are trefoil (Lotus corniculatus), medics (Medicago spp.), clover (Trifolium spp.), and vetches (Vicia spp.). Brassica forage species include cultivars of oilseed rape (Brassica napus) and kale (Brassica oleracea). Fodder beet (Beta vulgaris) is another temperate forage.
The combination of forage crops grown in any country varies depending on climate and livestock needs; however, the perennial legume lucerne or alfalfa (Medicago sativa) is the most widely cultivated, as it can be grown with both temperate and tropical grasses or as a standalone crop. This is a huge topic to review, as there are so many species grown across the world; a few examples are the tropical grasses Pennisetum and Brachiaria and, more prominently, the temperate crops Lolium and alfalfa. Pulses are annual leguminous crops yielding from one to 12 grains or seeds of variable size, shape, and color within a pod. They are used for both food and feed. In addition to their food value, pulses also play an important role in cropping systems because of their ability to fix nitrogen and thereby enrich the soil. Pulses contain carbohydrates, mainly starches (55-65 percent of the total weight); proteins, including essential amino acids (18-25 percent, much higher than cereals); and fat (1-4 percent). The remainder consists of water and inedible substances.

Establishment of Cover Crops

Establishment of a cover crop requires a relatively weed-free seedbed and good seed-to-soil contact, similar to other crops. Drilling the seed in will provide better establishment than broadcasting it. Fall cover crop seed can be applied aerially onto standing corn or soybeans as they are drying down, if enough sunlight can penetrate to the ground between the rows. Aerial seeding only works for small seeds (such as turnip, radish, and rye), but the success of establishment will depend on rainfall after seeding. Interseeding between rows at the V4 to V6 stage also has been performed successfully, whereas seeding later than V6 does not provide much forage growth. Regardless of timing, if moisture is limited, plant growth will be limited or nonexistent. Thus, in dry years, aerial seeding and interseeding are not cost-effective.
Recommended seeding rates, depths, and dates for each cover crop also must be considered when planning to integrate cover crops into an operation.

Annual Cover Crop Grazing Options

Selecting a cover crop forage or mixture of forages for grazing livestock will depend on the season of use for optimal performance, as well as seed availability and cost. For the full-season grazing period, a mixture of cool- and warm-season grasses, broadleaf crops, and legume species is recommended. This type of mixture will create diversity, minimize risk due to weather conditions, extend the grazing period due to different growth stages, and increase soil health benefits. Matching the proper forage species with the season of use is critical to optimize forage production potential. Seed costs can be reduced by avoiding low-production plant species while still providing high-quality feed.

Alfalfa

Alfalfa is the fourth most widely grown crop in the United States, with an estimated annual value of 11.7 billion dollars. There are 26 million acres cut for hay, with an average yield of 2.3 tons per acre. One of the most important characteristics of alfalfa is its high nutritional quality. Alfalfa contains between 15 and 22% crude protein, as well as high amounts of 10 different vitamins. Alfalfa can be a very productive crop with high levels of biomass accumulation. The record yield of one acre of alfalfa is 10 tons. Alfalfa hay is used as a feed primarily for dairy cows but also for horses, beef cattle, sheep, and other farm animals. Alfalfa (Figure 5.2.3) is a perennial cool-season legume grown widely around the world. It is a high-quality, high-yielding forage crop. Alfalfa can be utilized as hay, silage, greenchop, or in grazing systems, allowing producers to use this forage in a variety of ways that fit their farm needs. Seeding alfalfa can be challenging if certain requirements are not followed.
Alfalfa is considered a "demanding crop," so good establishment and management are essential to ensure high yields and full exploitation of alfalfa's potential.

No-Till Practices

No-till practices offer several advantages over conventional establishment in crops like alfalfa. Soil conservation, moisture conservation, reduced weed pressure, and a longer planting window are just a few of the advantages. For successful no-till establishment, attention should be given to site selection, site preparation, and planting. Disturbing soil with tillage opens the organic matter up to the atmosphere. The active microorganisms of the soil microbiome are at risk of dehydration and excessive sun exposure. When done correctly, tilling can reduce pests and weeds, but this comes at the risk of nutrient depletion, decreased bioavailability of the remaining nutrients for plants, and loss of soil organic matter.

Crop Rotations

Planned crop rotations can increase yields, improve soil structure, reduce soil loss, conserve soil moisture, reduce fertilizer and pesticide needs, and provide other environmental and economic benefits. Many crop rotations reduce soil loss and are an option for meeting conservation compliance on highly erodible land. The growth of hay, small grain crops, or grass sod in rotation with conventionally tilled row crops reduces the soil's exposure to wind and water and decreases total soil loss. These rotations, however, are a desirable option for farmers only when profitable markets exist or the conservation crops can be utilized by farm livestock enterprises. Crop rotations may reduce profits when the acreage and frequency of highly profitable crops are replaced with crops earning lower returns.

Storing Forage Crops

Crops baled for later use as forage should be harvested in a way that maximizes yield and quality.
Hay should include a mix of grasses and legumes, or a single species that can be harvested at the correct time, to potentially meet an animal's protein and energy requirements. A single species may make hay management easier; however, it lacks diversity, leading to fewer soil health, pollinator, and wildlife benefits.

Forage Crop Economics

Forages, largely grasses and legumes, are the principal source of nutrition for most ruminant livestock in developing countries, thus contributing to the supply of nutrient-dense foods like meat and milk, as well as products like leather and wool. The gross value of cultivated forages is given by the product of their area, yield, and price. However, utilized yield from grazed land is difficult to measure, and forages, even when harvested, may be used only on the farms where they are grown. Soil health benefits are difficult to quantify, even though they provide economic value. Thus, market information on quantities traded and prices received for forages is very limited. If production costs exceed returns, a cover crop may not be an economically viable option. Factors for consideration when growing forage crops include seed cost, input resources, and the quality of purchased seed, which together determine the potential to produce high-quantity and high-quality feed. Farmers can benefit both their cattle and their crops by grazing livestock on cover crops. Grazing cover crops increases soil fertility and the aggregate stability of soil particles, and may improve water infiltration. Livestock producers also save on labor and fossil fuels otherwise spent hauling manure and feeding cattle in a dry lot. Herd health benefits also may be found when cattle graze longer on pasture instead of being confined in a dry lot setting.
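The gross-value relationship described above (area × yield × price) can be sketched in a few lines of code. The acreage and price figures below are hypothetical placeholders chosen for illustration, not market data; only the 2.3 tons-per-acre yield comes from the alfalfa figures cited earlier.

```python
def forage_gross_value(area_acres: float, yield_tons_per_acre: float,
                       price_per_ton: float) -> float:
    """Gross value of a cultivated forage = area x yield x price."""
    return area_acres * yield_tons_per_acre * price_per_ton

# Hypothetical example: 500 acres of hay at the 2.3 tons/acre average
# yield cited above, with an assumed price of $180 per ton.
value = forage_gross_value(500, 2.3, 180)
print(f"${value:,.0f}")  # $207,000
```

As the text notes, the hard part in practice is not the arithmetic but obtaining reliable yield and price data, since forages that are grazed or consumed on the farm where they are grown rarely pass through a market.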
Attributions

Alfalfa: Soybean Genomics and Improvement Laboratory: Beltsville, MD by the United States Department of Agriculture is in the Public Domain.
Annual Cover Crop Options for Grazing and Haying in the Northern Plains by the United States Department of Agriculture is in the Public Domain.
FAO (2010). Challenges and Opportunities for Carbon Sequestration in Grassland Systems: A Technical Report on Grassland Management and Climate Mitigation. (Rome: Food and Agriculture Organization of the United Nations).
Improving the Yield and Nutritional Quality of Forage Crops by Nicola M. Capstaff and Anthony J. Miller is licensed CC BY 4.0.
No-Till Establishment of Alfalfa by Márcia Pereira Da Silva, Renata Nave Oakes, and Gary Bates, University of Tennessee. Copyright © University of Tennessee Extension. Used with permission.
Source: Food and Agriculture Organization of the United Nations (FAO). Oil-Bearing Crops and Derived Products. Accessed: August 12, 2021. https://www.fao.org/waicent/faoinfo/economic/faodef/fdef06e.htm
The Extent and Economic Significance of Cultivated Forage Crops in Developing Countries by Fuglie K, Peters M and Burkart S is licensed CC BY 4.0.
USDA Publication 4.2 Crop Rotations by the United States Department of Agriculture is in the Public Domain.
3.2 Common Crops and Their Uses
3.3 Common Production Practices of Vegetable Crops
3.4 Vegetable Crop Economics

Vegetable Crops

Overview

Title Image: "Farmers' Market" by Natalie Maynor is licensed CC BY 2.0.

Introduction

Lesson Objectives

- Identify examples of vegetable crops.
- Select examples of common vegetable crops from the list provided.
- Identify common uses of vegetable crops.
- Select common uses of vegetable crops from the list provided.
- Explain common production practices for vegetable crops.
- Describe common vegetable crop production practices.
- Evaluate the economic impact of vegetable crops.

Key Terms

vegetable crops - plants grown with parts that are to be consumed by humans or other animals as food

Introduction

Vegetables are parts of plants that are consumed by humans or other animals as food. The original meaning is still commonly used and is applied to plants collectively to refer to all edible plant matter, including the flowers, fruits, stems, leaves, roots, and seeds. An alternative definition of the term is applied somewhat arbitrarily, often by culinary and cultural tradition. It may exclude foods derived from some plants that are fruits, flowers, nuts, and cereal grains, but include savory fruits such as tomatoes and courgettes, flowers such as broccoli, and seeds such as pulses.

Common Crops and Their Uses

Vegetable crops are eaten by humans and animals and valued for their high nutritional content. Vegetable crops are generally classified as warm season or cool season according to the temperature ranges they require or prefer. Figure 5.3.1 shows a wide range of vegetables and their growing seasons in the state of Tennessee. Warm-season vegetables are most productive in higher temperature ranges (late spring, summer) and are better able to grow and produce a quality crop through summer heat.
They are damaged or killed by frost and freezing conditions; even cool, non-freezing temperatures may prevent them from growing and yielding well. Therefore, growers should pay attention to local frost dates when selecting planting times. Cool-season vegetables can withstand temperatures below 32° F (how far below varies by crop and situation) and are generally more productive and have higher quality produce when grown during cooler spring and fall seasons. Because of these attributes, cool-season crops are planted in the late winter or early spring to avoid the hottest part of the summer. They can often be seeded again in the late summer to provide another crop during the fall season. New vegetable varieties are constantly being developed throughout the world. Since it is impossible to list and describe all of them, only some of the better performing commercial types are listed in the specific crop section (Figure 5.3.2), either alphabetically or in order of relative maturity from early to late. These varieties are believed to be suitable for commercial production under most conditions in the southeastern US. Common Production Practices of Vegetable Crops The development of various types of tillage practices was an integral part of the evolution of modern farming approaches. Tillage is helpful in crop production systems for purposes of weed management, incorporation of amendments such as lime and fertilizer, burial of crop residues to facilitate other field operations, disease management, and the preparation of a seedbed that is conducive to crop establishment. While the use of tillage practices provides a number of benefits to crop producers, researchers have also learned that the soil disturbance associated with tillage has some drawbacks. In a nutshell, tillage over time results in the degradation of several soil properties that are important to crop productivity. One of these properties is organic matter content. 
Organic matter is important because it contributes to the water and nutrient holding capacity of soil and to the maintenance of a desirable soil structure. These soil properties, in turn, allow soil to better support the weight of equipment and workers. In warm southern climates the loss of organic matter due to tillage is even more pronounced than in cooler climates. Tilled soil is also less hospitable to a variety of soil organisms including microbes, insects, and other small animals. When present in adequate numbers these are beneficial for various reasons. When minimum tillage is used, soil structure is improved by the release of exudates of various organisms that glue soil particles together into larger, more desirable aggregates. Plant roots benefit from the increased presence of pore spaces in the soil such as earthworm channels, and plant diseases may also be reduced by the increased diversity of soil microorganisms. Adoption of minimum tillage in vegetable production is possible but requires careful planning and preparation. Making a transition to minimum tillage will affect several vegetable production field operations. For example, one common objective of minimum tillage is to retain crop residues on the soil surface. These residues are beneficial for reducing soil erosion but also may interfere with the seeding of crops, particularly small-seeded vegetable crops. Similarly, cultivation, often an important measure for controlling weeds in vegetables, may require different equipment than what the farmer is able to use in conventionally tilled fields. In general, it may be best to start with those vegetables that are grown similarly to agronomic row crops or to use crops that can be established by transplanting through crop residues. Row crop examples include sweet corn and cowpeas. Examples of vegetables that are easily transplanted include tomato, pepper, squash, and watermelon. 
Growers interested in adopting minimum tillage practices should begin by learning about the practices currently employed by agronomic crop producers and others who grow vegetables using reduced tillage. One such practice is to limit tillage and seedbed preparation to a narrow strip where the crop will be planted. This may be done in combination with cover crops that are killed by rolling and crimping prior to tilling the strip. This method has been used successfully for vegetables such as tomatoes and cucurbits.

Many soils that are unproductive due to poor physical properties can be restored and made more productive through the continued use of cover crops. Cover crops can provide many benefits to soils, including reducing the buildup of soilborne disease and arthropod pests, increasing soil organic matter, suppressing weeds, improving soil structure, promoting beneficial soil microorganisms, improving nutrient cycling, and reducing soil erosion. Each cover crop offers different potential benefits to a production system, and not every cover crop will work for each grower's intended purpose. Many cover crops can reduce or limit the buildup of soilborne disease and insect pests that damage vegetable crops. Prevalent disease and insect pressure should be considered when selecting a cover crop, as some cover crops could increase the severity of these issues. In some cases, specific cultivars of cover crops differ in their host status to various plant-parasitic nematodes.

With intensive cropping, working the soil when it is too wet, or subjecting it to excessive traffic from heavy equipment, will damage the soil. These practices cause soils to become hard and compact, resulting in poor seed germination, loss of transplants, and shallow root formation in surviving plants. Such soils easily form crusts on the surface and become compacted, making them difficult to irrigate properly.
Combined, these practices yield negative consequences for the soil: poor plant stands, poor crop growth, low yields, and loss of income. In some cases, sub-soiling in the row might help improve aeration and drainage, but its effect is limited and short term. Continued, dedicated use of cover crops helps prevent these conditions, though it may take several years of use before some of the benefits appear.

Cover crops can also be planted in strips for wind protection, preceding the planting of the cash crop. Annual rye seeded before November can be a good choice for wind protection. Cover crops reduce nutrient loss during the winter and early spring. Cover crops may also deplete soil moisture; if this is a concern, they should be disked or plowed under before soil moisture runs low. Seeding dates suggested in the following section are for the central part of the Southeastern United States and will vary with elevation and with more northern or southern locations.

Vegetable Crop Economics

For more than five years, the United States has reported harvesting over 2 million acres of vegetable crops, though the figure has been steadily decreasing: 2.6 million acres were harvested in 2016 versus 2.2 million acres in 2021. The annual market value of vegetable crops has fluctuated as well, from 13.6 billion dollars in 2016 down to 12.7 billion dollars in 2021.

Dig Deeper

Vegetable crops | Cabbage | Onion | Pepper | Tomato | Watermelon | https://www.marketnews.usda.gov/mnp/fv-nav-byCom?navClass=VEGETABLES&navType=byComm

Kemble et al.'s 2022 Southeast U.S. Vegetable Crop Handbook is a free online resource available from the Southeastern Vegetable Extension Workers Group.

Attributions

2022 Southeast United States Vegetable Crop Handbook by the SEVEW Group. Copyright © Used with Permission.
The Tennessee Vegetable Garden: Garden Planning, Plant Preparation and Planting by Natalie Bumgarner, University of Tennessee. Copyright © Used with Permission.

USDA Definition of Specialty Crop by the United States Department of Agriculture is in the Public Domain.

Vegetable by Wikipedia is licensed CC BY-SA.

Vegetable Statistics by the United States Department of Agriculture National Agricultural Statistics Service is in the Public Domain.
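The acreage and value figures reported in the Vegetable Crop Economics section above can be sanity-checked with a short calculation. The 2016 and 2021 numbers come directly from the text; the percent-change helper and the computed rates are our own illustrative arithmetic:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

# Figures cited in the Vegetable Crop Economics section
acres_2016, acres_2021 = 2.6, 2.2    # million acres harvested
value_2016, value_2021 = 13.6, 12.7  # billion dollars, annual market value

acreage_change = pct_change(acres_2016, acres_2021)  # roughly -15.4 percent
value_change = pct_change(value_2016, value_2021)    # roughly -6.6 percent
```

The comparison shows that harvested acreage fell proportionally more than market value over the same period, consistent with the text's note that value "fluctuated" while acreage declined steadily.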
4.2 Fruits and Tree Nuts
4.3 Stone Fruit
4.4 Fruit and Tree Nut Production Practices
4.5 Fruit Economics

Introduction to Fruit Crops

Fruit Crops Overview

Title Image: Berries. Credit: Scott Bauer, United States Department of Agriculture, Agricultural Research Service; Public Domain

Introduction

Lesson Objectives

- Identify examples of fruit crops.
- Select examples of common fruit crops from the list provided.
- Identify common uses of fruit crops.
- Select common uses of fruit crops from the list provided.
- Explain common production practices for fruit crops.
- Describe common fruit crop production practices.
- Evaluate the economic impact of fruit crops.

Key Terms

fruit crops - plants grown to produce sweet and fleshy, seed-bearing food

stone fruit - a fruit with flesh or pulp enclosing a stone (peach, plum, etc.)

Introduction

A bowl of berries is a treat for the eye as well as a delight for the palate. But these tasty little morsels happen to be quite tricky to grow, harvest, and handle. These crops tend to have brief growing seasons and are vulnerable to insects, disease, and even birds. Fruits are considered delectable treats and bring higher cash value per acre than most other crops.

Fruits and Tree Nuts

Fruit and tree nuts are defined by the United States Department of Agriculture as specialty crops. A nut is a fruit consisting of a hard or tough nutshell protecting a kernel that is usually edible.
Included in the list of fruits and tree nuts are almond, apple, apricot, avocado, banana, blackberry, blueberry, breadfruit, cacao, cashew, cherimoya, cherry, chestnut (for nuts), chokeberry, citrus, coconut, coffee, cranberry, currant, date, feijoa fruit, fig, filbert (hazelnut), gooseberry, grape (including raisin), guava, kiwi, litchi, macadamia, mango, nectarine, olive, papaya, passion fruit, peach, pear, pecan, persimmon, pineapple, pistachio, plum (including prune), pomegranate, quince, raspberry, strawberry, Suriname cherry, and walnut. Many seeds from fruits are edible by humans and are used in cooking, eaten raw, sprouted, roasted as a snack food, ground to make nut butters, or pressed for oil used in cooking and cosmetics.

Stone Fruit

A stone fruit, also called a drupe, is a fruit with a large "stone" inside. The stone is sometimes called the seed, but that is a mistake, as the seed is inside the stone. The stone can also be called a pit. These fruits are edible and used frequently in cooking. Peaches, apricots, cherries, and plums are all considered stone fruits and are widely used in culinary dishes.

Fruit and Tree Nut Production Practices

Fruit and tree nut sites must be selected carefully to ensure the plants will thrive for many years. Proper site selection and preparation, as well as careful orchard establishment, lead to good yields for fruit and tree nuts. First, spacing is very important. Next, the amount of sunlight available each day can influence plant growth. With less than 10 square feet, a small berry bush would be appropriate. With a 10-to-20-square-foot area, a self-pollinating dwarf fruit tree, fig, or persimmon would be appropriate. With more than 20 square feet, a self-pollinating apple, pear, peach, or plum tree can be grown. Some species, like pecan trees, require as much as 70 square feet of space. Most trees must be pollinated to produce fruit, so having multiple fruit trees may be necessary if self-pollination is not possible.
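The spacing guidance above can be sketched as a simple lookup. The thresholds and plant lists come from the text; the function itself, its name, and the decision to treat 70 square feet or more as pecan territory are illustrative assumptions, not part of the source guidance:

```python
def suggest_planting(area_sq_ft: float) -> list[str]:
    """Illustrative mapping from available space to suitable plantings,
    following the square-footage guidance in the text."""
    if area_sq_ft < 10:
        return ["small berry bush"]
    if area_sq_ft <= 20:
        return ["self-pollinating dwarf fruit tree", "fig", "persimmon"]
    if area_sq_ft < 70:
        return ["self-pollinating apple", "pear", "peach", "plum"]
    # Assumption: large species such as pecan need as much as 70 square feet
    return ["pecan"]

suggest_planting(15)  # ['self-pollinating dwarf fruit tree', 'fig', 'persimmon']
```

Remember that this only addresses space; as the text notes, pollination requirements may still call for multiple trees.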
Fruit trees that require cross-pollination need at least twice as much space to accommodate the two or more different varieties needed to get fruit set. Pruning for fruiting improves air circulation, increases produce quality, and develops a desirable tree shape. If a fruit or nut tree is planted in a space that is too small, it must be pruned to contain its size rather than to promote fruiting. That kind of pruning stresses the tree, making it more susceptible to insect and disease damage and reducing its productivity. Thinning is a process that removes a portion of a fruit crop to help the remaining fruit grow to adequate size and better quality. Thinning can also increase subsequent crop yields for fruits like peaches, apples, pears, plums, and nectarines. Chemical treatments may be necessary for trees that are too large to manage otherwise.

Berries happen to be quite tricky to grow, harvest, and handle. These crops tend to have brief growing seasons and are vulnerable to insects, disease, and even birds, so Agricultural Research Service (ARS) scientists have given them lots of attention. Take strawberries: in the 1950s, ARS saved the strawberry industry in the Great Lakes region by releasing the first varieties that could survive red stele, a root-rotting fungus. Strawberry breeding has a long history in America. ARS came up with such June-bearing favorites as Earliglow, a sweet and juicy berry with wonderful flavor. There are berries that bear fruit from spring until well into the fall, like Tribute and Tristar, which have brought new market opportunities to Northwest strawberry growers. Fifteen years ago, blueberries were practically nonexistent in the Gulf States, but early-ripening varieties have extended highbush blueberry culture to the Deep South. Today, over 10,000 acres are grown in Dixie, with more than 4,000 acres thriving throughout Texas, Louisiana, and Alabama.
In the Pacific Northwest, where most of the United States' red raspberries are grown, Willamette, a variety released in 1943, still accounts for 40 percent of the red raspberry acreage. And when USDA blackberry breeders introduced the first truly genetic thornless blackberries, Thornfree and Smoothstem, they caused a small roadside revolution: the new varieties were just what some growers needed to establish pick-your-own operations.

Fruit Economics

The U.S. fruit and tree nuts industry consists of a wide array of crops and products generating, on average, over $25 billion in farm cash receipts annually. Produced on less than 2 percent of U.S. agricultural cropland, farm cash receipts from this sector account for about 7 percent of the total receipts for all agricultural commodities and around 13 percent for all crops. Foreign markets serve as outlets for less than 20 percent of overall U.S. fruit and tree nut supplies, while nearly half of the available supplies for domestic consumption come from imports.

Dig Deeper

https://content.ces.ncsu.edu/extension-gardener-handbook/15-tree-fruit-and-nuts

The 2020/2021 PennState Tree Fruit Production Guide is a free online resource available from PennState Extension.

Mark Rieger's Introduction to Fruit Crops is a free online resource developed as an aid to the class 'Introduction to Fruit Crops' (HORT 3020) at the University of Georgia in Athens. The material comes from the book he wrote for HORT 3020, which is still used in the class today and is reliable as a reference for any internet-based or traditional college class.

Attributions

Title Image: Berries. Credit: Scott Bauer, United States Department of Agriculture, Agricultural Research Service; Public Domain

Defining "Specialty Crops": A Fact Sheet by the Congressional Research Service is in the Public Domain.

Fruit & Tree Nut Overview by the United States Department of Agriculture is in the Public Domain.
Image k7229-19 by the United States Department of Agriculture is in the Public Domain.

Nut (fruit) by Wikipedia is licensed CC BY-SA.

Stone Fruit by Wikipedia is licensed CC BY-SA.
5.2 Crop Rotation in Annual Crops
5.3 Crop Rotation in Perennial Crops
5.4 Biodiversity
5.5 Disease and Pest Management
5.6 Environmental Benefits and Considerations of Crop Rotations
5.7 Pesticide and Fertilizer Use Under Different Crop Rotations
5.8 Alley Cropping
5.9 Soil Building
5.11 Animal Manures
5.12 90-120 Day Rule
5.13 Compost
5.14 Compost Tea
5.15 Vermicompost
5.16 Processed Animal Manures
5.17 Ratooning
5.18 Soil Conservation
5.19 Cover Cropping
5.20 Organic Mulches
5.21 Conservation Tillage
5.22 Contour Conservation and Strip Cropping
5.23 Mixed Cropping
5.24 Food Forests
5.25 Nutrient Management: Nitrogen and Trace Minerals
5.26 Genetic Engineering

Crop Biodiversity Overview

Title Image: Strips of oats and hay are interspersed with strips of corn to save soil and improve water quality and wildlife habitat on this field in northeast Iowa. Credit: United States Department of Agriculture – Natural Resources Conservation Service; Public Domain

Introduction

Lesson Objectives

- Defend the need for genetic diversity in cropping systems.
- Identify various cropping systems that promote genetic diversity.
- Recognize the advantages and disadvantages related to genetic diversity.
Key Terms

conservation tillage - the minimal use of soil cultivation with crops

crop rotation - cycling through planting different types of crops for several years

mixed cropping - growing different types of plants in the same land

monoculture - a sequence where the same crop is planted for 3 consecutive years

polyculture - the process of growing multiple crops in a designated area to mimic the natural environment

ratooning - the process of cutting plant stems down to stimulate another round of growth

sequential cropping - growing various crops on the same land in different years, one after the other

vulnerability - plants' susceptibility to pests and environmental conditions

Introduction

Expanding markets, new production technologies, and economic competition in recent decades have resulted in crop specialization, increased purchase of off-farm inputs, and production practices that often have adverse environmental consequences. Monoculture (successively growing the same crop on the same land), continuous row crops, and other intensive cropland uses have increased with the availability of commercial fertilizers to supply nutrient needs and chemicals to control pests. Crop rotations that include hay, grass sod, and other soil-conserving crops were abandoned by many producers as the demand for hay and forages declined. The choice between monoculture and rotating different crops on the same land depends on a broad range of economic and physical factors, and the choice of rotation frequently affects the use of fertilizer and pesticides.

Crop Rotation in Annual Crops

For producers of annual crops, complying with crop rotation standards is straightforward and often beneficial for crop health. Crop rotation refers to the sequencing of crops over time on a field or planting bed. Rotations typically mean that crops are not followed by a member of the same crop family. Sequential cropping is not unique to organic systems, as it is also practiced by many conventional farmers.
However, organic systems are unique in that crop rotation is specifically required in the USDA organic regulations. Crop rotation can:

- interrupt insect life cycles.
- suppress soilborne plant diseases.
- prevent soil erosion.
- build organic matter.
- fix nitrogen.
- increase biodiversity of the farm.

Crop rotations are an important way to suppress insects and diseases. For example, farmers who raise potatoes will rotate the field out of solanaceous crops for at least 2 years before replanting potatoes. This helps reduce populations of insects, such as the Colorado potato beetle, and prevent diseases, such as late blight. Rotations with 3 to 5 years between the same crop may be needed to effectively reduce insect and disease levels.

Rotations also can be designed to increase soil fertility. A crop sequence that features soil-improving crops can counterbalance soil-depleting crops. Soil-improving crops include sod crops dominated by perennial grasses and perennial legumes. Sod crops in rotation build soil organic matter and reverse the decline that typically occurs when cultivated annual crops are grown year after year. Legumes, such as alfalfa, clovers, beans, and peas, are especially beneficial because they fix nitrogen from the atmosphere and make it available to subsequent crops. Even short-term, nonleguminous cover crops can provide benefits when used as part of the crop-rotation plan. The best cover crops are specific varieties adapted to the soil, climate, and season. They are sown at a fairly high rate to cover the soil quickly and prevent erosion.

When planning crop rotations, it is important to remember that cultivated row crops, such as vegetables, tend to degrade soil. Since the soil is open and cultivated between rows, microbes break down organic matter at a more rapid pace. Furthermore, row crops have modest root systems and consequently do not contribute enough new organic matter to replace that lost from the open soil between rows.
In most cases, above-ground crop residues make only minor contributions to replacing lost organic matter. In contrast, cereals and cover crops are more closely spaced and have more extensive root systems than row crops, greatly reducing the amount of soil exposed to degradation. In addition, these crops receive little or no cultivation after planting, which reduces organic-matter loss even more. As a result, cereals and green manures can be considered neutral crops, replacing soil organic matter at roughly the same rate at which it breaks down.

Crops that make a perennial sod cover, such as grasses, clovers, and alfalfa, not only keep the soil entirely covered but also have massive root systems that produce far more organic matter than is lost. Incorporating sod crops as a fundamental part of a crop rotation not only builds soil but also supports weed-control strategies. Weed control improves because the types of weeds encouraged by row-cropping systems are usually not adapted to growing in a sod/hay crop. To make the most efficient use of sod crops, it is necessary to include livestock in the system or to find a market for the hay. Livestock assist in transferring nutrients (via manure) from one part of the farm to another. The major drawback to selling hay is that the nutrients it contains are shipped off the farm.

Crop Rotation in Perennial Crops

For producers of organic perennial crops, the requirement for crop rotation can be confusing. Farmers should implement practices that will maintain soil organic matter, control pests, conserve nutrients, and protect the soil against erosion. For growers of annual crops, those practices typically include crop rotation, but other practices can be substituted if rotation is not practical. Some perennials will be part of a long-term crop rotation, which may last a few years or even decades. Asparagus, for example, is a perennial that can be productive for 15 years or more.
When a field is taken out of asparagus production, it is typically planted with another crop to reduce the incidence of soilborne disease. That practice is considered a long crop rotation. Several other perennials, such as strawberries, Echinacea, and lavender, are not required to have a cover crop because they are typically part of a long crop rotation. Other types of perennials, those that will not be part of a crop rotation, may require additional practices to ensure soil conservation and biodiversity in the cropping system. This is important with large perennials, such as trees, that have large drive rows between the crop rows. For example, organic farmers must have a cover crop (often grass) between the rows of trees in an orchard. Crops that are required to have a cover crop between crop rows include caneberries, grapevines, blueberries, tree fruits, and nut trees. Some perennial crops, such as alfalfa, develop a canopy that covers the ground and prevents soil erosion. Such crops are not required to be rotated with other crops.

Biodiversity

Many organic farmers actively manage their farms to increase biodiversity because of its many benefits. Biodiversity plays a particularly crucial role in pest management. Although farmers are encouraged to have diverse systems, there are no specific requirements, standards, or monitoring practices. Diverse agricultural systems support strong populations of predators and parasites that help keep pest populations at manageable levels. This approach is proactive rather than reactive because a diverse system reaches an equilibrium that prevents pest outbreaks from becoming too severe. Birds and bats can keep insect populations low. Raptors can scare away fruit-eating birds. Coyotes, owls, and foxes can keep rodent populations under control. These animals can be encouraged to stay on the farm by providing the shelter, water, and habitat they need, which in turn reduces the plants' vulnerability to pests.
Organic producers increase biological diversity in the plant canopy by planting a diversity of crops and plant varieties in any given season. Use of cover crops and hedgerows also adds biodiversity. The diversity of vegetation, combined with reduced use of broad-spectrum pesticides, increases the diversity of insects and spiders in the plant canopy. Introducing beneficial insects, and providing habitat for them to become established, will increase biodiversity further. To promote biodiversity in the soil, it is helpful to minimize tillage, introduce microorganisms in compost, and avoid broad-spectrum pesticides. These practices increase the variety of bacteria, fungi, and invertebrates in the soil.

Disease and Pest Management

In many field crop and vegetable systems, maintaining a diverse, healthy ecosystem and using well-timed cultural practices are sufficient for pest management. Pests may not be eliminated, but damage levels are low enough to be tolerated. Organic producers maintain that organic soil-building practices will result in crops that are properly nourished and thereby less susceptible to attack by pests and diseases. Natural biological pest control arises in a healthy organic system in the form of an active complex of natural predators and parasites that suppress pest populations. Incorporating habitat and food sources for beneficial insects into the farm, known as farmscaping, can provide long-term benefits.

Environmental Benefits and Considerations of Crop Rotations

Crops face the danger of extensive damage or destruction from a variety of sources, including weeds, pests, diseases, adverse environmental conditions, and unfavorable weather. Potential crop yields can be seriously restricted by a lack of crop protection. Planned crop rotations can increase yields, improve soil structure, reduce soil loss, conserve soil moisture, reduce fertilizer and pesticide needs, and provide other environmental and economic benefits.
However, crop rotations may reduce profits when the acreage and frequency of highly profitable crops are replaced with crops earning lower returns. Many crop rotations reduce soil loss and are an option for meeting conservation compliance on highly erodible land. Growing hay, small grain crops, or grass sod in rotation with conventionally tilled row crops reduces the soil's exposure to wind and water and decreases total soil loss. While beneficial, crop sequencing can be complex and requires more knowledge about plants and growing (Figure 5.5.2). These rotations, however, are a desirable option for farmers only when profitable markets exist or when the conservation crops can be utilized by on-farm livestock enterprises.

Alternating wheat and fallow is a common practice for conserving soil moisture in regions with low rainfall. Applying tillage practices to minimize evaporation and transpiration from idle land in one season increases the amount of stored soil moisture available for the crop in the following season.

The ability of legume crops to fix atmospheric nitrogen and supply soil nitrogen needs for subsequent crops is well documented. The plowdown of established alfalfa or other legumes can provide carryover nitrogen for a crop that requires high levels of nitrogen, such as corn. Research has shown that soybeans can be managed to fix 90 percent of their nitrogen needs and provide a soil nitrogen credit of 20 pounds or more per acre for a subsequent crop (Heichel, 1987). However, soybeans grown in rotation with corn on soils already rich in nitrogen have not been shown to fix significant amounts of nitrogen.

Crop rotations affect pest populations and can reduce the need for pesticides. Different crops often break pest cycles and prevent pest and disease organisms from building to damaging levels.
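The rotation-interval guidance earlier in this lesson (3 to 5 years between plantings of the same crop, and not following a crop with a member of the same family) can be expressed as a simple check on a planned sequence. The helper function and the small crop-to-family table below are illustrative sketches, not part of the source:

```python
def rotation_ok(sequence: list[str], family: dict[str, str], min_gap: int = 3) -> bool:
    """True if no crop family recurs within min_gap years of its last planting.

    sequence lists one crop per year; family maps crop names to botanical
    families (crops missing from the map are treated as their own family).
    """
    last_seen: dict[str, int] = {}
    for year, crop in enumerate(sequence):
        fam = family.get(crop, crop)
        if fam in last_seen and year - last_seen[fam] < min_gap:
            return False
        last_seen[fam] = year
    return True

# Illustrative family table; potato and tomato share pests such as
# the Colorado potato beetle because both are solanaceous crops.
families = {"potato": "solanaceae", "tomato": "solanaceae",
            "corn": "poaceae", "clover": "fabaceae"}

rotation_ok(["potato", "corn", "clover", "potato"], families)  # True: 3-year gap
rotation_ok(["potato", "corn", "tomato"], families)            # False: same family after 2 years
```

A real plan would also weigh the fertility and erosion considerations discussed above, not just family spacing.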
Corn rootworm is the most common target of insecticide treatment on corn, yet rotating to another crop is normally sufficient to reduce rootworm survival to levels that do not require insecticide treatment. Hay and grass sod grown in rotation with corn, however, may increase the need for other corn insecticides to treat other pests. Besides providing erosion control, small grains, hay, and grass sod are competitive with broadleaf weeds and may help control weed populations in subsequent crops. These crops are usually harvested, or can be cut, before weeds reach maturity and produce seed for germination the following season. Weeds on prior idle acres or fallow land may be controlled by either cutting or tilling to reduce weed infestations the following year. Sometimes herbicides are used to kill existing vegetation on idle land (chemical fallow) in lieu of mechanical methods.

Rotations also can reduce financial risk and provide a more sustainable production system. Since adverse weather or low market prices are less likely to affect all crops simultaneously, the diversity of products resulting from crop rotation can reduce risk.

Pesticide and Fertilizer Use Under Different Crop Rotations

Crop rotation is often key to a sustainable agricultural production system and can reduce the need for fertilizer and pesticides. Fertilizer applications are often adjusted to account for prior nitrogen-fixing crops. Fewer pesticides may be needed when rotations break pest cycles or reduce infestation levels.

Alley Cropping

Alley cropping is defined as the planting of rows of trees and/or shrubs to create alleys within which agricultural or horticultural crops are produced. Alley cropping systems are sometimes called intercropping, especially in tropical areas. The trees produced through alley cropping may include valuable hardwood veneer or lumber species; fruit, nut, or other specialty crop trees and shrubs; or desirable softwood species for wood fiber production.
As trees and shrubs grow, they influence the light, water, and nutrient regimes in the field. These interactions are what set alley cropping apart from more common monocropping systems. Alley cropping can vary from simple systems, such as an annual grain rotation between timber tree species, to complex multilayered systems that can produce a diverse range of agricultural products. It is especially attractive to producers interested in growing multiple crops on the same acreage to improve whole-farm yield. Growing a variety of crops in close proximity to each other can create significant benefits for producers, such as improved crop production and microclimate benefits, and can help them manage risk.

Soil Building

For centuries before the advent of chemical fertilizers, farmers supplied all the nutrients for their crops solely by adding organic matter to the soil. As fresh organic matter, such as crop residues, decomposes, it forms a stable substance called humus. Organic matter can be added to soils with compost, animal manures, or green manures. Adding organic matter is a fundamental way to build soils. Organic matter provides food for microorganisms, such as fungi and bacteria, and macroorganisms, such as earthworms. As these diverse soil organisms decompose organic matter, they convert nutrients into forms that are available to plants. Soils high in organic matter also have improved water-holding capacity, helping plants resist drought.

Green Manures

Green manures are crops grown specifically for soil improvement. They are typically incorporated into the soil after they have produced a large amount of biomass or, in the case of legumes, fixed a significant amount of nitrogen. Managing green manure crops to increase organic matter and provide the maximum amount of nitrogen to the following crop is both an art and a science. Annual grasses, small grains, legumes, and other useful plants like buckwheat can be inserted into the cropping sequence to serve as green manures.
Their roots pull nutrients from deeper soil layers, and the tops are plowed into the soil to add organic matter and a stable source of nutrients. In particular, deep tap-rooted crops such as alfalfa, sweet clover, rape, and mustard are known to extract and use minerals from the deeper layers of soil. Legumes add nitrogen to the soil. Nitrogen accumulations by leguminous cover crops can range from 40 to 200 pounds of nitrogen per acre. The amount of nitrogen captured by legumes depends on the species of legume grown, the total biomass produced, and the percentage of nitrogen in the plant tissue. Cultural and environmental conditions that limit legume growth, such as a delayed planting date, poor stand establishment, and drought, will reduce the amount of nitrogen produced. Conditions that favor high nitrogen production include a good stand, optimum soil nutrient levels and soil pH, good nodulation, and adequate soil moisture.

Animal Manures

Conservation of manure and its proper application are key means of recycling nutrients and building soil. Farms without livestock often buy manure or compost because they are considered to be among the best fertilizers available, though sole reliance on fertilizers from other farms has drawbacks such as cost, availability, and transportation. Manures from conventional systems are allowed in organic production, including manure from livestock grown in confinement and from those that have been fed genetically engineered feeds. Manure sources containing excessive levels of pesticides, heavy metals, or other contaminants may be prohibited from use. Such contamination is likely present in manure obtained from industrial-scale feedlots and other confinement facilities. Certifiers may require testing for these contaminants if there is reason to suspect a problem. Herbicide residues have been found in manures and manure-based composts. One such herbicide, aminopyralid, is used in pastures for control of broadleaf weeds.
Grass and corn are not affected by the herbicide, and cows are not affected when they eat the grass or silage. However, the herbicide can be present in their manure in concentrations high enough to stunt the growth of tomatoes, peppers, and other susceptible broadleaf crops. If a manure source is suspected of being contaminated with excessive amounts of prohibited substances, appropriate testing should be conducted. If test results indicate that the manure is free of excessive contamination, and it is subsequently used in production, the test results should be kept on file.

Used properly, manures can replace all or most needs for purchased fertilizer, especially when combined with a whole-system fertility plan that includes crop rotation and cover cropping with nitrogen-fixing legumes. Manure is typically applied just ahead of a crop requiring high fertility, such as corn or squash. Manures also can be applied just prior to a cover crop planting. Incorporating the manure as soon as possible after application, rather than allowing it to remain on the soil surface, conserves the maximum amount of nitrogen. Although manure is an excellent fertilizer for crops, and has been used that way for centuries, it may harbor microorganisms that are pathogenic to humans. To minimize the possibility of illness due to organic foods, there are strict regulations on the use of manure in organic crops.

90-120 Day Rule

Application of manure to organic crops is restricted by what is known as the 90-120-day rule, as described in § 205.203(c)(1): "You may not apply raw, uncomposted livestock manure to food crops unless it is:

1. Incorporated into the soil a minimum of 120 days prior to harvest when the edible portion of the crop has soil contact; OR
2. Incorporated into the soil a minimum of 90 days prior to harvest of all other food crops."

Incorporation is generally assumed to mean mechanical tillage to mix the manure into the soil.
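The quoted § 205.203(c)(1) rule reduces to a single day-count comparison. The sketch below is an illustrative reading of that rule for a food crop (the function name and parameters are our own, and this is study material, not compliance advice):

```python
def manure_interval_ok(days_before_harvest: int, edible_part_contacts_soil: bool) -> bool:
    """Check the 90/120-day raw-manure interval for a food crop.

    Per the quoted regulation, raw uncomposted manure must be incorporated
    at least 120 days before harvest when the edible portion of the crop
    has soil contact, and at least 90 days before harvest otherwise.
    """
    required_days = 120 if edible_part_contacts_soil else 90
    return days_before_harvest >= required_days

manure_interval_ok(100, edible_part_contacts_soil=True)   # False: e.g., leafy greens need 120 days
manure_interval_ok(100, edible_part_contacts_soil=False)  # True: e.g., sweet corn needs only 90
```

Note that the rule applies only to food crops; fiber crops, cover crops, and livestock feed crops are exempt.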
This is important for crops that have soil contact which include leafy greens, melons, squash, peas, and many other vegetables. Any harvestable portion of a crop that can be splashed with soil during precipitation or irrigation might be considered to have soil contact. Crops that do not have soil contact include tree fruits and sweet corn. The 90- and 120-day restrictions apply only to food crops; they do not apply to fiber crops, cover crops, or to crops used as livestock feed. Compost Perhaps no other process is more closely associated with organic agriculture than composting. Composting is one of the most reliable and time-honored means of conserving nutrients to build soil fertility. Because matured, well-made compost is a stable fertilizer that will not burn plants and because composting kills most human and plant pathogens, compost can safely be used as a side-dress fertilizer on food crops. Animal manures used in organic crop production often are composted before use, in part because some types of raw manure will burn plants if applied directly to crops. Composting reduces the number of viable weed seeds, creates a uniform product with predictable nutrient levels, and eliminates worries about human pathogens. If manures are composted according to USDA organic regulations, then they are considered compost, not manure, and may be applied without restrictions. If manure is aged but not composted according to the regulations, then the material is still considered manure and must be applied in accordance with the 90–120-day rule explained above. The composting procedures are adapted from U.S. Environmental Protection Agency (EPA) and USDA’s Natural Resources Conservation Service (NRCS) guidelines for composting biosolids. This policy was established to ensure the elimination of pathogens that cause illness in humans. 
The regulations define compost as “the product of a managed process through which microorganisms break down plant and animal materials into more available forms suitable for application to the soil...” Compost used in organic production must be made according to the criteria set out in § 205.203(c)(2). This section of the regulations specifies that: - “The initial carbon: nitrogen ratio of the blended feedstocks must be between 25:1 and 40:1. - The temperature must remain between 131 °F and 170 °F for 3 days when an in-vessel or a static-aerated-pile system is used. - The temperature must remain between 131 and 170°F for 15 days when a windrow composting system is used, during which period the windrow must be turned at least five times.” Organic farmers often maintain a compost pile on the farm as an efficient and cost-effective way to retain nutrients on the farm and build soil. If compost feedstocks include raw manure, they must be composted in the method detailed above. This composting process must be explained in a system plan and documented with temperature records. If those requirements are not met, then the resulting compost must be applied according to the 90-120-day raw manure rule. If compost feedstocks do not include raw animal manures, then the resulting compost is considered plant material and there are no restrictions on its use. Compost Tea Some organic farmers apply compost teas to crops or soil to increase the populations of beneficial microbes. If compost tea will be applied to organic crops, it is critical that the compost used to produce the extract has been made according to USDA organic regulations. The procedures for making both the compost and the compost tea must be explained in your OSP. Applications of teas made from uncomposted manure must follow the 90-120-day rule. The tea extract may need to be tested to ensure that it is free of dangerous pathogens, particularly if the tea has been made with compost tea additives. 
The additives, such as molasses, provide nutrients for microbes and thereby increase their rate of growth. There is some concern that any human pathogens present will grow more abundantly in a tea made with these additives. Further details on the recommendations for the use of compost tea are available in the NOP publications listed at the end of this chapter. Vermicompost Vermicompost is compost that uses worms to digest the feedstocks. Since feedstocks may include animal manures, there has been debate as to whether the 90-120-day rule should apply to vermicompost. The NOP has issued the following guidance: feedstocks for vermicompost materials may include organic matter of plant or animal origin. Feedstocks should be thoroughly macerated and mixed before processing. Vermicomposting systems depend upon regular additions of thin layers of organic matter at 1- to 3-day intervals. Doing so will maintain an aerobic environment and avoid temperature increases above 35 °C (95 °F), which will kill the earthworms. The composting process must be described in the OSP, reviewed by the certifier, and well documented on the farm. Further details are available in the NOP publications listed at the end of this chapter. Processed Animal Manures Heat-treated, processed manure products may be used in organic production. There is no required interval between application of processed manure and crop harvest. From the standpoint of the farmer, of course, these inputs would be applied well before harvest, so that the nutrients would be available to the crop. To be considered processed, the manure must be heated to 150 °F for 1 hour and dried to 12 percent moisture or less. Ratooning Ratooning is a production practice that is sometimes used on plants like sugarcane and okra. The process involves cutting stems down in mid-summer. Plants are then fertilized after being ratooned to support plant growth. 
This process rejuvenates the plant to stimulate another round of harvest on new growth in the later summer to early fall and is common in commercial growing. Soil Conservation Careful conservation and management of crop residues is part of organic soil management, since this residue plays a valuable role in improving and protecting the soil. The key to soil conservation is to keep the ground covered for as much of the year as possible. Organic farmers have long recognized the value of basic soil conservation. There are many practices that help conserve soil, including cover crops, mulches, conservation tillage, contour plowing, and strip cropping. Since water erosion is initiated by raindrop impact on bare soil, any management practice that protects the soil from raindrop impact will decrease erosion and increase water entry into the soil. Mulches, cover crops, and crop residues all serve this purpose well. A major limitation of organic row-crop farming is that cultivation is used for weed control, since herbicides are not allowed. This cultivation creates and maintains bare ground, which increases the likelihood of soil erosion. By contrast, soil that is covered with an organic mulch of crop residue, such as that typically found in no-till fields, is less likely to erode. Organic no-till systems have yet to be perfected for annual row crops, but they work well for perennial fruit crops and pasture, allowing for year-round ground cover and virtually no soil erosion. Cover Cropping Cover crops are single species or mixtures of plants grown to provide a vegetative cover between perennial trees, vines, or bushes; between annual crop rows; or on fields between cropping seasons. The vegetative cover on the land prevents soil erosion by wind and water, builds soil fertility, suppresses weeds, and provides habitat for beneficial organisms. Cover crops also can help reduce insect pests and diseases, and legume cover crops fix nitrogen. 
Any crop grown to provide soil cover is considered a cover crop, regardless of whether that crop is later incorporated into the soil as a green manure. Both green manures and other types of cover crops can consist of annual, biennial, or perennial herbaceous plants grown in a pure or mixed stand during all or part of the year. When cover crops are planted to reduce nutrient leaching following a cash crop, they are termed “catch crops.” This type of cover crop is typically grown over the winter when the field would otherwise be unoccupied. Organic Mulches Organic mulches cover the soil and provide many of the same benefits as cover crops, especially the prevention of soil erosion. Many organic materials—such as straw, leaves, pine needles, and wood chips—can be effective mulches. Straw and other materials that are easily decomposed are applied to strawberries and vegetables during the growing season. The mulch can be tilled in at the end of the season, where it will quickly decompose. Wood chips, because they decompose very slowly, are more commonly applied to perennial crops such as blueberries, where they will not be tilled in. Applying organic mulch can be labor-intensive. Tree fruit growers sometimes mow the drive rows and blow the green clippings into the tree rows, which automates the mulching process. Heavy mulches can be a benefit by suppressing weed growth, or a nuisance by providing a haven for slugs. Organic mulches keep the soil cool, which may be a boon for blueberries in hot climates and a drawback for tomatoes in cool spring weather. Organic mulches have a beneficial long-term effect because they add nutrients to the soil as they decompose. Mulches of high-carbon material may have the opposite effect because they tie up nitrogen during the decomposition process. However, this should not be a problem if mulches are used properly—that is, placed on top of the soil, and not incorporated. 
Conservation Tillage

In conservation tillage, crops are grown with minimal soil cultivation. This is also known as no-till, minimum till, incomplete tillage, or reduced tillage. When the amount of tillage is reduced, the residues from the plant canopy are not completely incorporated into the soil after harvest. Crop residues remain on top of the soil and prevent soil erosion, a practice known as crop residue cover. The new crop is planted into this stubble or into small strips of tilled soil within the stubble.

Contour Conservation and Strip Cropping

Slope plays a role in soil conservation, in that flat ground erodes less than sloping ground with equal amounts of ground cover. Contour plowing is the practice of plowing across a slope following its elevation contour lines, rather than straight up and down the slope. The cross-slope rows formed by contour plowing slow water runoff during rainstorms to prevent soil erosion. Strip farming, also known as strip cropping, alternates strips of closely sown crops, such as hay or small grains, with strips of row crops, such as corn, soybeans, or cotton. Strip farming helps prevent soil erosion by creating natural dams for water, helping to preserve the soil.

Mixed Cropping

The growing of several crops simultaneously in the same field but not in rows is called mixed cropping. Mixed cropping, including intercropping, is the oldest form of systemized agricultural production and involves the growing of two or more species or cultivars of the same species simultaneously in the same field. However, mixed cropping has gradually been replaced by sole-crop systems, especially in developed countries. Mixed cropping offers advantages such as resource-use efficiency and yield stability, but it also presents challenges, such as weed management and competition between crops.

Food Forests

Modern agriculture has leaned heavily on monoculture field cropping.
Many have found polyculture to be a natural solution for modern issues like soil water conservation, nutrient deficiencies in soil, and disease and pest management. Trees can provide many benefits in gardens and in urban environments. They produce fruit, like apples, peaches and figs, and also provide shade and wildlife habitat. Food forests support forest ecosystems and connect communities with nature. Trees of different sizes produce nuts and fruit, while their shade can support a variety of fresh, flavorful mushrooms, herbs, and berries. Trees improve air quality and help soil retain water. Nutrient Management: Nitrogen and Trace Minerals Although organic matter plays an important role in building productive soils, there are specific crops and soil types that will benefit from additional applications of specific nutrients. Organic farmers are allowed to use a variety of fertilizers to provide micronutrients to their crops. Before applying micronutrients, soil deficiencies must be documented through soil tests, plant tissue tests, observing the condition of plants, or evaluating crop quality at harvest. Nitrogen is often a limiting nutrient, especially for vegetables and other row crops. Including legumes in the rotation can help to ensure sufficient nitrogen for the following crop. Biological nitrogen fixation in legumes results from a symbiotic relationship between the plant and Rhizobium bacteria. These bacteria “infect” the roots of legumes, forming nodules. The bacteria then fix nitrogen from the air, which results in sufficient nitrogen both for their own needs and for subsequent crops. The inoculation of legume seed may be necessary to optimize nitrogen fixation. It is important to purchase an inoculant appropriate to the kind of legume being planted to ensure it is not genetically modified. Genetically modified inoculants are prohibited in organic production. 
Genetic Engineering The planting of GM crops is regulated—new varieties may not be widely planted until they’ve been approved by USDA. If conventional seed is planted, the certifier will request proof that it is not genetically engineered. This verification is becoming more important each year, as the number of genetically modified (GM) crops increases. The use of GM seeds is prohibited in organic agriculture, and it is the responsibility of organic growers to make certain that the crops they grow are not genetically engineered. GM crops that are now being planted or will soon be available include alfalfa, beets, corn, soybeans, papaya, plum, rapeseed, tobacco, potato, tomato, squash, cotton, and rice. This list is expected to change, as genetically engineered versions of several other crops have been developed but have not yet been released for commercial production. The most current information about GM crops is maintained by the USDA Animal and Plant Health Inspection Service (APHIS). Seed companies that develop a new variety of genetically modified seeds must submit a petition to APHIS before that seed can be distributed to the public. Genetic engineering is considered an excluded method and is defined as a variety of methods used to genetically modify organisms or influence their growth and development by means that are not possible under natural conditions or processes and are not considered compatible with organic production. Such methods include cell fusion, microencapsulation and macroencapsulation, and recombinant DNA technology (including gene deletion, gene doubling, introducing a foreign gene, and changing the positions of genes when achieved by recombinant DNA technology). Such methods do not include the use of traditional breeding, conjugation, fermentation, hybridization, in vitro fertilization, or tissue culture. 
With certified organic production, if it is necessary to use conventional seeds, it is essential to verify that the variety has not been genetically engineered and to keep documentation of this verification, as your inspector will ask to see it. Seed companies that have taken the Safe Seed Pledge may be convenient sources of non-GMO seeds. The Safe Seed Pledge was developed by the Council for Responsible Genetics and has been signed by numerous seed companies.

Dig Deeper

Attributions

Title Image: Strips of oats and hay are interspersed with strips of corn to save soil and improve water quality and wildlife habitat on this field in northeast Iowa. Credit: United States Department of Agriculture – Natural Resources Conservation Service; Public Domain.

4.2 Crop Rotations by the United States Department of Agriculture is in the Public Domain.

Alley Cropping by the United States Department of Agriculture is in the Public Domain.

Guide for Organic Crop Producers by the United States Department of Agriculture is in the Public Domain.

"Sustainable Mixed Cropping Systems for the Boreal-Nemoral Region" by Lizarazo et al. is licensed CC BY 4.0.

Trees and Food Forests by the United States Department of Agriculture is in the Public Domain.
Basic Crop Accounting

Overview

Economics is USDA’s Helping Science by the United States Department of Agriculture is in the Public Domain.

Introduction

Discussion Topic: What are the current prices of the fruit and vegetables in the title image? What factors create the sale price?

Lesson Objectives

Evaluate the economic impact of field crops, forage crops, vegetable crops, and fruit crops.
Explain the significance of ROI to investment decision making.
Evaluate the role of depreciation in evaluating production of crops.
Key Terms

asset - items of value that a farm owns or uses
cost of production - the total dollar amount of inputs related to a specific crop
debt - money that is owed
depreciation - the non-cash expense related to the loss of value of an asset
expense summary - a list of all financial contributions to the business
liabilities - an obligation to pay a debt
liquidity - the ability to meet the short-term cash needs of a farm
owner equity - the difference between a farm’s assets and its liabilities
return on investment (ROI) - the amount of money gained in relation to the amount invested
solvency - the ability to repay the money loaned if a farm stopped doing business today

Introduction

Tennessee agriculture includes a diverse list of livestock, poultry, fruits and vegetables, row crop, nursery, forestry, ornamental, agritourism, value added and other nontraditional enterprises. These farms vary in size from less than a quarter of an acre to thousands of acres, and the specific goal for each farm can vary. For example, producers’ goals might include maximizing profits, maintaining a way of life, enjoyment, transitioning the operation to the next generation, etc. Regardless of the farm size, enterprises and objectives, it is important to keep proper farm financial records to improve the long-term viability of the farm. Accurate recordkeeping and organized financial statements allow producers to measure key financial components of their business such as profitability, liquidity and solvency. These measurements are vital to making knowledgeable decisions to achieve farm goals.

Economic Impact of Agriculture

Economics is the study of using resources to produce goods and services as effectively and efficiently as possible to satisfy the needs and wants of consumers.
In agriculture, the producer of goods or services may be an agribusiness firm manufacturing a food product that meets the desires of consumers, or agricultural producers growing a crop to meet the needs of a food processor. To produce a product (a good or service), a business needs resources, such as labor (i.e., workers), land (e.g., a building), equipment, and cash (capital). Restated: to operate a business, the manager needs resources, and one of the manager's responsibilities is to decide which resources to use and how to use them. The United States produces and sells a wide variety of agricultural products across the Nation. In terms of sales value, California leads the country as the largest producer of agricultural products (crops and livestock), accounting for almost 11 percent of the national total, based on the 2012 Census of Agriculture. Iowa, Texas, Nebraska, and Minnesota round out the top five agricultural-producing States, with those five representing more than a third of U.S. agricultural-output value. U.S. fruit and tree nut value of production has increased steadily over the past decade, while the value of vegetable production has been more stable. Grapes, apples, strawberries, and oranges top the list of fruits; tomatoes and potatoes are the leading vegetables. Tree-nut value rose dramatically to record levels of around $10 billion in recent years (Figure 5.6.1), with crop values for most major tree nut crops, led by almonds, walnuts, and pistachios, reaching historical highs.

Cost of Production

Cost of production is the dollar value of all your inputs for growing a specific crop. For example, to produce an acre of tomatoes, these inputs would include so many units of seed, fertilizer, irrigation water, labor and machinery time, etc. Each of these units has a dollar value. Add them up, and you have the cost of production for the crop.
Knowing the production costs of your crops is a prerequisite for determining how well your farm business is doing: profit is the difference between the value of the yield per acre and the value of the inputs. It enables you to evaluate how efficiently resources are being used in your farm operations, to predict how your business will respond to specific changes, and to make other useful decisions for attaining your goals. Estimating costs is easy in some instances and more difficult in others. Assigning costs is more straightforward for those inputs or raw materials you purchase for a single production period. If you use 20 pounds of fresh tomato seed an acre at $0.80 per pound, your seed cost is $16 (the seed quantity multiplied by its price). Costs for fertilizer, pesticides, irrigation water, and hired labor can be determined the same way. Production expenses that aren't itemized also are included in this category as miscellaneous expenses. These can include entries to cover expenses such as office use, supplies, bookkeeping, and legal fees. The name for a cost category is determined by its contents. For example, "direct operating costs" indicates that values of items included in the category are straightforward and used only in the production of one specific crop. A "variable cost" category means that its values can fluctuate, depending upon the amount of input used. Another cost category is that of "imputed costs." In this category are costs for interest charges, insurance, depreciation and taxes. Interest charge is the cost of your money that is tied up in the production of a crop. It reflects the amount of money you pay on borrowed money or the amount you could have earned had you invested your own resources in alternative uses in the market. Interest on operating capital is calculated using the current interest rate. In the attached cost example, an annual interest charge of 15 percent, or 1.25 percent per month, is assumed.
Interest charge on operating costs is calculated as follows:

(Total cash operating expense for the month) x (Number of months the capital is used) x (Monthly interest rate)

The number of months the capital is used begins when the operating capital is invested and ends when it is recovered (usually the harvesting period or sale month for the crop). For example, if your fertilization and weed control operations are done in April, your interest charge for these expenses will cover 5 months, assuming August is the recovery or sale time. Thus, the interest charge is calculated:

$(40 + 40) * 5 * .0125 = $5

Note: .0125 = 1.25% or 1.25/100

The same procedure is used to determine other operating expenses. Interest on investment is charged at the current annual interest rate on the average investment and is calculated as follows:

Interest on investment/acre = (Investment cost)/(2 x No. of acres) x Annual interest rate

Note: (Investment cost)/(2 x No. of acres) is the average investment per acre; the division by 2 reflects that, over its life, the investment is on average worth half its original cost.

If your investment for machinery, equipment and irrigation system amounts to $102,700 and your farm is 40 acres, your investment interest charge per acre will be:

(102,700)/(2 * 40) * .15 = $192

Note: .15 = 15% or 15/100

The purpose of insurance is to cover the risk of having farm machinery or irrigation equipment destroyed or stolen. A charge of 0.5 to 1 percent of the average investment generally is sufficient. Insurance per acre at 0.5 percent is:

(102,700)/(2 * 40) * .005 = $6

Note: .005 = 0.5% or 0.5/100

The other imputed cost item is depreciation. Depreciation can be calculated in various ways for various purposes. Fast write-off techniques can be used on the original cost of machinery for income tax purposes. However, for continued production, the machinery needs to be replaced. In such cases, depreciation reflects the cost of replacement and is based on the current value of the machinery.
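A short script can reproduce the imputed-cost arithmetic above (the $40 + $40 operating expenses, $102,700 investment, 40 acres, and 15 percent rate are the example's own figures; the function names are mine):

```python
def interest_on_operating_capital(expense, months, monthly_rate=0.0125):
    """Interest on cash operating expenses from outlay until sale."""
    return expense * months * monthly_rate

def interest_on_investment_per_acre(investment, acres, annual_rate=0.15):
    """Interest on the average investment (half the original cost), per acre."""
    return investment / (2 * acres) * annual_rate

def insurance_per_acre(investment, acres, rate=0.005):
    """Insurance at 0.5 percent of the average investment, per acre."""
    return investment / (2 * acres) * rate

# The text rounds the last two results to $192 and $6.
print(round(interest_on_operating_capital(40 + 40, months=5), 2))   # 5.0
print(round(interest_on_investment_per_acre(102_700, 40), 2))       # 192.56
print(round(insurance_per_acre(102_700, 40), 2))                    # 6.42
```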
The straight-line method is the simplest and most straightforward way of calculating depreciation: simply divide the current cost of the machine by its useful life. Following is an example of a depreciation schedule. Since the attached schedule is intended only as a guideline, no attempt has been made to provide exact current machinery costs or a complete machinery complement. Current machinery values can be obtained from local dealers or up-to-date publications. Direct costs of plants can include plant material, hard material, material sales tax, direct labor, casual labor, equipment applied, equipment rental, subcontracts, and more. Indirect expenses can be extensive, such as bad debt expense, bidding expense, benefit labor, indirect labor, replacement labor, supervision wages, premium compensation, payroll taxes, workers compensation insurance, job travel and lodging, replacement material, safety expenses, self-insurance, small tools, supplies, trash removal and uniform expenses.

Calculating Estimated Crop Yield

Anticipating expenses and revenue can be a useful tool in managing finances. Grain yield can be estimated prior to harvest. Remember this is just an estimate, as field conditions are rarely uniform. The general formulas to estimate grain yield for 7-inch row spacing are:

Wheat: Grain yield (bu/acre) = (kernels per spike x spikes per 3 ft of row) x 0.0319
Barley: Grain yield (bu/acre) = (kernels per spike x spikes per 3 ft of row) x 0.0389
Oats: Grain yield (bu/acre) = (kernels per spike x spikes per 3 ft of row) x 0.0504

To adjust to other row spacings, multiply the grain yield estimate by:

6 inch row width: 1.17
7.5 inch row width: 0.93
10 inch row width: 0.70
12 inch row width: 0.58

Determining Yield Potential by Assessing Ears at or Near Dent Stage

The following procedures can be used to assess potential yield in corn.
This assessment is best done at or near dent stage so that you can identify the kernels at the tip of the cob that will fill and reach maturity.

Step 1: Determine the number of plants in 1/1000th of an acre. Mark off a section of row representing 1/1000th of an acre. Table 1 shows the row length required to do this at different row spacings. Count the number of plants with productive ears in this area.

Step 2: Determine the number of kernels on each ear. Select ears from five consecutive plants at five different locations in the field. For each ear, count the number of rows around the ear (Figure 5.6.2). Select one or two rows of kernels and count the number of kernels from the base to the tip of the ear (Figure 5.6.3). Do not count the first kernels at the base of the ear or very small kernels at the ear tip. Multiply the number of rows on the ear by the kernels per row to determine kernel number per ear. Average the kernel number per ear across all twenty-five ears selected from the field.

Step 3: Determine corn yield potential. Use the following equation to determine yield in bushels per acre:

(Plants per 1/1000th of an acre x average number of kernels per ear) / 90

The denominator (90) combines the average number of dry kernels in a bushel of corn (about 90,000) with the fact that you measured 1/1000th of an acre. If the kernels are bigger than normal, you could consider dividing by 85; if the kernels are smaller than normal, you could divide by 105. In the ear example above, the equation (based on a plant population of 33,000 plants per acre, i.e., 33 plants per 1/1000th acre) is:

(33 x 20 x 28) / 90 = 205.3 bushels per acre (Figure 5.6.4)

Table 1. Row length needed to measure 1/1000th of an acre for determining plant populations at different row spacings.

Balance Sheet

A balance sheet is a financial statement that shows a detailed list of all assets, liabilities and the owner's equity position of the farming operation at a specific point in time.
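Stepping back to the yield-estimation procedures above, both the small-grain formulas and the corn calculation can be collected into one illustrative sketch (function names are mine; the constants are those given in the text):

```python
GRAIN_FACTORS = {"wheat": 0.0319, "barley": 0.0389, "oats": 0.0504}  # 7-inch rows
ROW_WIDTH_ADJUST = {6: 1.17, 7.5: 0.93, 10: 0.70, 12: 0.58}

def small_grain_yield(crop, kernels_per_spike, spikes_per_3ft, row_width=7):
    """Estimated grain yield in bu/acre for wheat, barley, or oats."""
    base = kernels_per_spike * spikes_per_3ft * GRAIN_FACTORS[crop]
    return base * ROW_WIDTH_ADJUST.get(row_width, 1.0)

def corn_yield(plants_per_thousandth_acre, avg_kernels_per_ear, kernels_factor=90):
    """Estimated corn yield in bu/acre from ears counted in 1/1000 acre.

    Use ~85 for larger-than-normal kernels, ~105 for smaller kernels.
    """
    return plants_per_thousandth_acre * avg_kernels_per_ear / kernels_factor

# The text's corn example: 33 plants, ears averaging 20 rows x 28 kernels.
print(round(corn_yield(33, 20 * 28), 1))                 # 205.3
# A hypothetical wheat stand: 30 kernels/spike, 50 spikes per 3 ft of row.
print(round(small_grain_yield("wheat", 30, 50), 2))      # 47.85
```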
To begin constructing a balance sheet, we need to first start with the standard accounting equation:

Total Assets = Total Liabilities + Owner’s Equity

The balance sheet is designed with assets on the left-hand side and liabilities plus owner’s equity on the right-hand side. This format allows both sides of the balance sheet to equal each other. After all, a balance sheet must balance. A change in liquidity, solvency and equity can be found by comparing balance sheets from two different time periods. Typically, changes in the balance sheet measurements are analyzed for the operation’s fiscal year (i.e., January 1 to December 31); however, these values can be compared for any time interval. A change in owner’s equity occurs from two sources: 1) income or loss from operations; and/or 2) a change in the value of an asset or liability. Changes in owner’s equity can indicate whether the farm is heading in a profitable direction. However, the balance sheet must be analyzed in conjunction with the income statement to determine profitability. The income statement summarizes revenue and expenses and is used to compute profit over a period of time. An expense summary gives a comprehensive look at all incoming revenue.

Assets

Assets are items of value that a farm owns or uses. Assets are generally split into two categories: current and noncurrent. A current asset is either cash (or cash equivalents) or an item that will become cash within a fiscal year (12 months). A noncurrent asset is something the farm owns or uses that will not turn into cash within the next accounting period and typically has a multiyear useful life. Some balance sheets further divide noncurrent assets into intermediate and long-term assets. In general, an intermediate asset is an asset with a useful life of one to 10 years (e.g., a tractor), while a long-term asset has a useful life of greater than 10 years (e.g., land) (Holland, 1997).
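As a minimal illustration of the accounting identity above (all dollar figures are invented):

```python
def owners_equity(total_assets, total_liabilities):
    """Owner's equity (net worth) from the accounting equation:
    Total Assets = Total Liabilities + Owner's Equity."""
    return total_assets - total_liabilities

assets = 850_000       # hypothetical farm: land, machinery, cash, inventory
liabilities = 320_000  # hypothetical: real estate, machinery, operating loans
equity = owners_equity(assets, liabilities)
print(equity)                          # 530000
assert assets == liabilities + equity  # both sides balance
```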
Noncurrent Asset Valuation

Assets can be valued using two different approaches: cost value and market value. Cost valuation, sometimes referred to as book value, is the original price paid for the asset minus the accumulated depreciation of that asset. Because the cost method takes into consideration depreciation, a producer can examine changes in the farm owner’s equity (net worth) and the overall invested capital performance (Langemeier, 2017). Market valuation is an estimate of what the asset would sell for on the date of the balance sheet. This valuation considers current prices, meaning the asset is valued based on what a buyer would pay at a specific point in time. For example, the market value for a tractor might be the trade-in value or what it could sell for at auction. The market value approach is important because it provides an estimate of what the farmer would actually receive for an asset if it was liquidated that day (sale proceeds could be less due to transaction costs and contingent liabilities). When selling costs are taken out of the market value, a farmer then has a clear picture of the cost or gain of that asset disposal (Langemeier, 2017).

Depreciation

There are two common methods by which assets can be depreciated: straight-line depreciation and declining balance depreciation. Straight-line depreciation is when an asset is depreciated by the same amount each year. It is also the simplest type of depreciation to calculate. The equation for the straight-line depreciation method is (Warren, 2013):

Annual Depreciation = (Original Cost - Salvage Value) / Useful Life

Declining balance depreciation is a method in which an asset depreciates rapidly in the first few years, and then the annual depreciation expense, in dollar terms, becomes smaller the closer the asset gets to the end of its useful life.
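The two methods can be compared in a short sketch (the $50,000 cost, $5,000 salvage value, and 10-year life are invented for illustration; the doubled declining rate mirrors the 20-percent-versus-10-percent example discussed next):

```python
def straight_line(cost, salvage, useful_life):
    """Equal annual depreciation: (cost - salvage) / useful life."""
    annual = (cost - salvage) / useful_life
    return [annual] * useful_life

def declining_balance(cost, salvage, useful_life, factor=2.0):
    """Double-declining balance: a fixed rate applied to the remaining
    book value each year; the final year writes the asset down to its
    salvage value so both methods accumulate the same total."""
    rate = factor / useful_life          # e.g., 2/10 = 20% vs. 10% straight line
    schedule, book = [], float(cost)
    for year in range(useful_life):
        if year == useful_life - 1:
            dep = book - salvage         # final year: write down to salvage
        else:
            dep = min(book * rate, book - salvage)
        schedule.append(dep)
        book -= dep
    return schedule

sl = straight_line(50_000, 5_000, 10)
db = declining_balance(50_000, 5_000, 10)
print(sl[0], round(db[0], 2))                  # 4500.0 10000.0
print(round(sum(sl), 2), round(sum(db), 2))    # 45000.0 45000.0
```

Note how the declining-balance schedule front-loads the expense (10,000 in year one versus 4,500) while both methods depreciate the same 45,000 in total, matching the text's observation about accumulated depreciation.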
For the example depicted in Appendix 3, initially, the depreciation rate for the declining balance method is double the straight-line depreciation rate (20 percent compared to 10 percent; this example is not reflective of current tax rules). The graph below (Figure 5.4.1) illustrates the two depreciation methods. With straight-line depreciation, the asset depreciates steadily (by the same amount each year) until it reaches its salvage value. With the declining balance method, the asset depreciates more rapidly initially but then slowly depreciates until the end of its useful life (Figure 5.6.5). In this example, both methods produce the same amount of accumulated depreciation at the end of the useful life of the asset. Depreciation of property used in the course of a farming business is allowed as a tax deduction for taxpayers.

Liabilities

An obligation to pay a debt is known as a liability. Just like assets, the liabilities section of the balance sheet can be separated into two sections (current and noncurrent) or three sections (current, intermediate and long-term). A current liability is a debt that must be paid within one fiscal year (12 months); an intermediate liability is a debt that is due within one to 10 years; and a long-term liability has a payback term longer than 10 years. In farming, liabilities are commonly associated with different loan types. An example of a current liability would be an operating or production loan. Operating loans are normally used to finance short-term cash flow shortfalls and/or to cover day-to-day business expenses. Interest on operating loans is typically paid monthly, with no set terms of principal repayment (operating loans should be paid in full annually upon the sale of commodities or liquidation of other current assets). An example of an intermediate liability is a machinery loan.
Machinery loans are typically considered intermediate because most farm machinery has an estimated useful life of 10 years or less (the machine may require constant repairs, be less dependable or become technologically obsolete). Intermediate loans should be amortized for fewer years than the useful life of the asset being purchased. These loans can be paid annually or monthly. The payment that is due consists of both interest and part of the principal balance. Lastly, an example of a long-term liability would be a farm real estate loan for purchase of land. Typically, farmland can be amortized over a maximum of 30 years. Long-term farm loans are normally paid on an annual basis; however, loan payments should coincide with income (i.e., a dairy may desire monthly payments rather than annual payments). The loan schedule in Appendix 4 displays all three types of loans, and their payments have been calculated based on individual interest rates and the remaining term of the loan.

Owner's Equity

The difference between a farm’s assets and its liabilities is called owner’s equity. It is sometimes referred to as a farm’s net worth. Depending on the legal entity of the farm (sole proprietorship, partnership, LLC, etc.), owner’s equity can be referred to differently. As a result, each owner’s equity section on the balance sheet will vary. For the purposes of this publication, owner’s equity is simply the farm’s net worth, which can be calculated by taking total assets less total liabilities.

Liquidity and Solvency

Two important farm financial measures that can be calculated from a balance sheet include liquidity and solvency. Liquidity is the ability to meet the short-term cash needs of a farm. Two common liquidity measures are the current ratio and working capital. The current ratio is current assets divided by current liabilities.
In Appendix 1, the current ratio for the beginning of the year was 2.76, meaning that the farm has $2.76 of current assets for every $1 of current liabilities. A current ratio greater than 2.0 is classified as strong (FINPACK, 2016). Working capital is a farm’s current assets less its current liabilities. This is the amount of cash the farm would have if all current assets were converted to cash and all current debts (including principal payments on term debts that are due in 12 months) were paid (excluding contingent liabilities and transaction costs). Solvency is the ability to repay the money loaned if a farm stopped doing business today. There are three commonly used ratios to measure solvency. The first is the debt-to-asset ratio, and it is calculated as total liabilities divided by total assets. This ratio signifies a farm’s debt load compared to its assets. The closer a debt-to-asset ratio is to 1, the greater the percentage of farm assets financed by debt (if the ratio exceeds 1, the business has become insolvent: liabilities exceed assets) (Holland, 1997). In the ratio analysis in Appendix 1, the debt-to-asset ratio at the beginning of the year was 0.229. This means roughly 23 percent of the farm’s assets are financed through debt. The equity-to-asset ratio (total owner’s equity divided by total assets) represents the proportion of total assets that are unencumbered (or debt free). A farm will have a higher equity-to-asset ratio the more it is able to pay its expenses without the use of loans. The last ratio is a measure of how much capital is being supplied by creditors, compared to capital used from farm equity. This is called the debt-to-equity ratio and is calculated by dividing the total liabilities by total owner’s equity. A lower debt-to-equity ratio is more desirable because it means the proportion of capital the farm is supplying through equity is greater than the portion supplied by creditors (debt).
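These liquidity and solvency measures follow directly from the balance-sheet totals. The inputs below are hypothetical (the appendices are not reproduced here); the liability total is chosen so the debt-to-asset ratio matches the 0.229 discussed above:

```python
# Hypothetical balance-sheet totals (illustrative; not the text's Appendix 1 data).
current_assets = 220_000
current_liabilities = 80_000
total_assets = 1_000_000
total_liabilities = 229_000   # chosen to reproduce the 0.229 debt-to-asset ratio
owners_equity = total_assets - total_liabilities

# Liquidity measures
current_ratio = current_assets / current_liabilities     # 2.75: $2.75 of current assets per $1 of current debt
working_capital = current_assets - current_liabilities   # 140000

# Solvency measures
debt_to_asset = total_liabilities / total_assets         # 0.229: ~23% of assets financed by debt
equity_to_asset = owners_equity / total_assets           # 0.771: share of assets that is debt free
debt_to_equity = total_liabilities / owners_equity       # ~0.297: creditor capital per $1 of equity

# The debt-to-asset and equity-to-asset shares always sum to 1
assert abs(debt_to_asset + equity_to_asset - 1) < 1e-9
```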
Appendix 2 contains a breakdown of each of the ratios and includes the desired outcome of each. When calculating solvency, a consistent value method needs to be used: either cost value or market value, but not both. Understanding how to construct and analyze a balance sheet is important for farmers. Farmers should utilize a balance sheet annually to examine and implement changes to improve their operations’ financial position. A farmer can use farm financial analysis to identify financial components of his or her business that could be improved (one cannot manage what one cannot measure). Improved financial performance can provide access to credit, reduce interest rates, and open opportunities for expanding the farm.

Profitability

One of the primary goals of a company is to be profitable. There are many ways a company can use profits. For example, companies can retain profits for future use, they can distribute them to shareholders in the form of dividends, or they can use the profits to pay off debts. However, none of these options actually contributes to the growth of the company. In order to stay profitable, a company must continuously evolve. A fourth option for the use of company profits is to reinvest the profits into the company in order to help it grow. For example, a company can buy new assets such as equipment, buildings, or patents; finance research and development; acquire other companies; or implement a vigorous advertising campaign. There are many options that will help the company to grow and to continue to be profitable. One way to measure how effective a company is at using its invested profits to be profitable is by measuring its return on investment (ROI), which shows the percentage of income generated by profits that were invested in capital assets. It is calculated using the following formula:

ROI = Income ÷ Average capital assets

Capital assets are those tangible and intangible assets that have lives longer than one year; they are also called fixed assets.
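As a quick sketch of the ROI calculation, using illustrative figures (not data from the text's example company), together with the sales-margin and asset-turnover components it decomposes into:

```python
# Illustrative division figures (hypothetical, for demonstration only).
income = 270_000              # operating income for the period
sales = 1_800_000             # sales revenue for the period
avg_capital_assets = 900_000  # average invested capital assets

# Basic ROI: income generated per dollar invested in capital assets
roi = income / avg_capital_assets             # 0.30 -> 30%

# DuPont decomposition: the sales terms cancel, yielding the same ROI
sales_margin = income / sales                 # 0.15: profit per dollar of sales
asset_turnover = sales / avg_capital_assets   # 2.0: sales per dollar of assets
assert abs(sales_margin * asset_turnover - roi) < 1e-12

print(f"{roi:.0%}")  # 30%
```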
ROI in its basic form is useful; however, there are really two components of ROI: sales margin and asset turnover. This is known as the DuPont Model. It originated in the 1920s when the DuPont company implemented it for internal measurement purposes. The DuPont model can be expressed using this formula:

ROI = Sales Margin × Asset Turnover

Sales margin indicates how much profit is generated by each dollar of sales and is computed as shown:

Sales Margin = Income ÷ Sales

Asset turnover indicates the number of sales dollars produced by every dollar invested in capital assets—in other words, how efficiently the company is using its capital assets to generate sales. It is computed as:

Asset Turnover = Sales ÷ Average capital assets

Using ROI represented as Sales Margin × Asset Turnover, we can get another formula for ROI. Substituting the formulas for each of these individual ratios, ROI can be expressed as:

ROI = (Income ÷ Sales) × (Sales ÷ Average capital assets)

To visualize this ROI formula in another way, we can deconstruct it into its components, as in Figure 12.4. When sales margin and asset turnover are multiplied by each other, the sales components of each measure will cancel out, leaving ROI = Income ÷ Average capital assets. ROI captures the nuances of both elements. A good sales margin and a proper asset turnover are both needed for a successful operation. As an example, a jewelry store typically has a very low turnover but is profitable because of its high sales margin. A grocery store has a much lower sales margin but is successful because of high turnover. You can see it is important to understand each of these individual components of ROI. Access for free at https://openstax.org/books/principles-managerial-accounting/pages/1-why-it-matters

Return on Investment

To put these concepts in context, consider a bakery called Scrumptious Sweets, Inc., that has three divisions and evaluates the managers of each of these divisions based on ROI.
The following information is available for these divisions: This information can be used to find the sales margin, asset turnover, and ROI for each division: Alternatively, ROI could have been calculated by multiplying Sales Margin × Asset Turnover: ROI measures the return in a percentage form rather than in absolute dollars, which is helpful when comparing projects, divisions, or departments of different sizes. How do we interpret the ROIs for Scrumptious Sweets? Suppose Scrumptious has set a target ROI for each division at 30% in order to share in the bonus pool. In this case, both the donut division and the bagel division would participate in the company bonus pool. What does the analysis regarding the brownie division show? By looking at the breakdown of ROI into its component parts of sales margin and asset turnover, it is apparent that the brownie division has a higher sales margin than the donut division, but it has a lower asset turnover than the other divisions, and this is affecting the brownie division’s ROI. This would provide direction for management of the brownie division to investigate why their asset turnover is significantly lower than the other two divisions. Again, ROI is useful if there is a benchmark against which to compare, but it cannot be judged as a stand-alone measure without that comparison. ROI helps an agribusiness manager to compare two crop options and make the best decision about which to produce for their business. Managers want a high ROI, so they strive to increase it. Closely monitoring costs of an operation can promote a strong ROI. Looking at its components, there are certain decisions managers can make to increase their ROI. For example, the sales margin component can be increased by increasing income, which can be done by either increasing sales revenue or decreasing expenses. 
Sales revenue can be increased by increasing sales price per unit without losing volume, or by maintaining current sales price but increasing the volume of sales. Asset turnover can be increased by increasing sales revenue or decreasing the amount of capital assets. Capital assets can be decreased by selling off assets such as equipment. For example, suppose the manager of the brownie division has been running a new advertising campaign and is estimating that his sales volume will increase by 5% over the next year due to this ad campaign. This increase in sales volume will lead to an increase in income of $140,000. What does this do to his ROI? Division income will increase from $1,300,000 to $1,440,000, and the division average assets will stay the same, at $4,835,000. This will lead to an ROI of 30%, which is the ROI that must be achieved to participate in the bonus pool. Another factor to consider is the effect of depreciation on ROI. Assets are depreciated over time, and this will reduce the value of the capital assets. A reduction in the capital assets results in an increase in ROI. Looking at the bagel division, suppose the assets in that division depreciated $500,000 from the beginning of the year to the end of the year and that no capital assets were sold and none were purchased. Look at the effect on ROI: Notice that depreciation helped to improve the division’s ROI even though management made no new decisions. Some companies will calculate ROI based on historical cost, while others keep the calculation based on depreciated assets with the idea that the manager is efficiently using the assets as they age. However, if depreciated values are used in the calculation of ROI, as assets are replaced, the ROI will drop from the prior period. One drawback to using ROI is the potential of decreased goal congruence. For example, assume that one of the goals of a corporation is to have ROI of at least 15% (the cost of capital) on all new projects. 
Suppose one of the divisions within this corporation currently has an ROI of 20%, and the manager is evaluating the production of a new product in his division. If analysis shows that the new project is predicted to have an ROI of 18%, would the manager move forward with the project? Top management would opt to accept the production of the new product. However, since the project would decrease the division’s current ROI, the division manager may reject the project to avoid decreasing his overall performance and possibly his overall compensation. The division manager is making an intentional choice based on his division’s ROI relative to corporate ROI. In other situations, the use of ROI can unintentionally lead to improper decision-making. For example, look at the ROI for the following investment opportunities faced by a manager: In this example, though investment opportunity 1 has a higher ROI, it does not generate any significant income. Therefore, it is important to look at ROI among other factors in order to make an informed decision.

Depreciation Theory

Depreciation is the allocation of the cost of an asset among the time periods when the asset is used. For example, the cost of a machine that is used to produce products during several production periods should be distributed among those production periods. Depreciation is the concept for allocating that cost. Do not allow "managing depreciation for income tax purposes" to interfere with understanding depreciation for management purposes. These are distinct topics and should be addressed as distinct topics. In managing depreciation for tax purposes, the manager will strive to make decisions, as allowed by federal income tax law, to maximize the business' after-tax income.
- In understanding depreciation for management purposes, the manager will strive to develop and follow a depreciation method that results in an accurate statement of costs and net income, without income tax considerations.
- Depreciation for purposes of management can be described as a procedure to allocate or assign a portion of the cost of an asset to each production period during which the asset is used.
- Deducting a depreciation allowance from the cost of an item does NOT reveal the value of the item. However, the value of an item provides insight into depreciation.
- Example. A $100,000 depreciable item that has an annual depreciation allowance of $18,000 does not mean the market (resale) value of that item will be $82,000 at the end of the first year.
- A $100,000 item that has a resale value of $87,000 after one year cost the owner $13,000 that year; the decrease in the value may be due to use (wear and tear), the fact that the item is one year old, or any other reason why the market value may have declined.
- Market value reveals some insight into depreciation, but a depreciation allowance has little or no relationship to the item's market value.
- An example of calculating depreciation based on a question from a farm manager (who is also a former student). Also see Cost v. Cash Outflow.

Depreciation is a procedure to allocate or assign a portion of the cost of an asset to each production period during which the asset is used. Related link: "The Cost of Owning and Operating Farm Machinery -- Utah 1997", pp. 6 and 7 of the pdf file. Profit is defined as "the difference between the revenue generated during a period of time and the costs incurred to generate that revenue during that period of time." Some assets or inputs, however, will be used during more than one production period; an easy example is equipment.
Accordingly, a procedure is necessary to allocate an appropriate portion of the cost of the input among the several time periods during which it will be used in the production process. This procedure of allocating cost is generally referred to as calculating depreciation; that is, assigning a portion of the cost of an asset to each production period during which the asset is used. To simplify the procedure, the calculations are often based on time; for example, some methods of depreciation allocate a portion of the cost of the machine to each production period during which the machine will be used. An alternative to allocating cost on the basis of time is to allocate the cost on the basis of use; thus, if the machine is used more heavily during one production period than during another, more of the cost of the machine will be assigned to the period of heavy use than to the period of light use. This alternative should provide the business manager with better information, that is, a more accurate measure of the cost to operate the business, and thus the profit generated by the business during each production period. Land is not considered a depreciable asset; presumably, land will not wear out or become obsolete. However, improvements to land are considered depreciable assets; for example, a well, dam, building, fence, irrigation system, or drainage system will wear out. A depreciable asset is an item that is used in more than one production period but will not last forever. Depreciation is a procedure for allocating the cost of a depreciable asset among the production periods (and enterprises?) in which the asset is used.

Depreciation for Income Tax Purposes

Perhaps the most frequent application of depreciation is in calculating the business net income (profit?) for purposes of determining the amount of income tax owed by the business or its owners. However, the depreciation allowance for income tax purposes is not likely to reflect the actual use of the machine.
Accordingly, it is a common recommendation that businesses maintain two depreciation schedules -- one that complies with income tax law and one that more accurately allocates the cost of the machine over its useful life. This page focuses on the second objective.

Depreciation for Management Purposes

Perhaps the simplest procedure for calculating depreciation is a straight-line method; that is, assign an equal portion of the cost of the machine to each production period during which it will be used. For example, a machine that cost $75,000 and will be used for 6 production periods would have a straight-line annual depreciation of $12,500 (75,000/6). This simple approach, however, may not provide the best information for the manager. For example, a new machine may be used more intensely immediately after it is acquired than it may be used in later years. Accordingly, depreciation procedures have been devised that allocate a greater portion of the cost to the first years of the machine's useful life. Another justification for this practice is that the market value drops most significantly during the early years even if the machine is not being heavily used. Likewise, there is an income tax benefit to depreciating the machine "as quickly as possible," but this page does not focus on this last justification.

Depreciation Based on Use

There is another way to consider depreciation for management purposes (as opposed to depreciation for income tax purposes): rather than measure the machine's useful life in terms of time, measure it in terms of possible production. For example, a tractor may have a projected useful life of 15,000 hours (even if the original buyer does not intend to own it that long; presumably someone else will purchase the used tractor and continue to operate it until it is "fully consumed" at 15,000 hours). If the tractor costs $135,000, the hourly depreciation over its useful life would be $9 per hour (135,000/15,000).
Using an hourly rate to calculate depreciation now allows the manager to assign an appropriate portion of the cost of the tractor to each activity. For example, if the tractor is used 1,000 hours one year and 2,500 hours another year, the first year would have to bear $9,000 of the tractor's original cost (1,000 x 9) whereas the second year would have to bear $22,500 of the tractor's original cost (2,500 x 9). Using an hourly rate for depreciation also simplifies the question of allocating cost among enterprises. For example, if the tractor was used 700 hours in the production of wheat one year and 300 hours in the production of soybeans (for a total of 1,000 hours as in the previous example), the wheat enterprise would have to bear $6,300 of the tractor's original cost (700 x 9) whereas the soybean enterprise would have to cover $2,700 of the cost (300 x 9). This method assumes a form of straight-line depreciation, but it allows the manager to more accurately assign the cost of the tractor to its actual use. This method could also be applied in terms of acreage; for example, a machine that has an expected life of 16,000 acres and cost $240,000 would have a depreciation expense of $15 per acre of use (240,000/16,000). This depreciation cost per acre can then be used to allocate the cost of the machine among enterprises and production periods.

Depreciation and Relationship to Cash Flow

The concept of depreciation has some unique characteristics relative to other operating costs. The primary difference is that the time when the cost of depreciation is accounted for by the business in computing its profit will not align with the time when the business has to pay for the machine. For example, purchasing the $135,000 tractor will require that the dealer be paid immediately even though the 15,000 hours of useful life may be spread over 5 to 20 years. Thus, the cash outflow to purchase the tractor does not align with when the depreciation will be recognized and subtracted as a cost.
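The hourly-rate allocation described above can be written out directly, using the tractor figures from the text:

```python
# Use-based depreciation for the $135,000 tractor with a 15,000-hour useful life.
cost = 135_000
useful_life_hours = 15_000
rate = cost / useful_life_hours   # $9 of depreciation per hour of use

def depreciation_for(hours_used):
    """Cost assigned to a production period or enterprise, based on use."""
    return hours_used * rate

# Allocation by year (from the text's example)
assert depreciation_for(1_000) == 9_000    # lighter-use year
assert depreciation_for(2_500) == 22_500   # heavier-use year

# Allocation by enterprise within the 1,000-hour year
assert depreciation_for(700) == 6_300      # wheat
assert depreciation_for(300) == 2_700      # soybeans
```

The same function works for acreage-based life: with a 16,000-acre life and a $240,000 cost, the rate becomes $15 per acre.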
Likewise, if the producer borrowed the $135,000 to purchase the tractor and will repay the debt to the bank over the next 4 years, the cash outflow will most likely not align with when the tractor is being used; that is, the tractor will likely be used for more than 4 years. It is critical that managers understand the distinction between cost (as reported on an income statement) and cash outflow (as reported on a cash flow statement). The two concepts are not the same. Certainly, principal payments on a loan to buy a tractor and the depreciation allowance to account for the cost of the tractor are one such example. Also see Cost v. Cash Outflow.

Depreciation Schedule (example)

- Initial data. Information to record might include the item; a unit of measure; the item’s useful life (in units of measure), cost and salvage value (if any); and the calculated depreciation per unit of measure.

| Item | Unit of measure | Useful life | Cost | Salvage value | Calculated depreciation per unit |
| --- | --- | --- | --- | --- | --- |
| X | Acre | 12,000 | $180,000 | $0 | $15/acre |
| Y | Hour | 20,000 | $240,000 | $20,000 | $11/hour |
| Z | Acre | 35,000 | $87,500 | $0 | $2.50/acre |

- Record of Use. Information to record could include item, depreciation per unit of use, enterprise in which the activity occurs, the quantity of use, and the calculated depreciation cost for the activity.

| Item | Depreciation per unit of measure | Enterprise | Quantity of Use | Calculated Depreciation |
| --- | --- | --- | --- | --- |
| X | $15/acre | Wheat | 400 acres | $6,000 |
| X | $15/acre | Corn | 500 acres | $7,500 |
| Y | $11/hour | Wheat | 28 hours | $308 |
| Z | $2.50/acre | Wheat | 400 acres | $1,000 |

Unit 5 Lab Exercises

Exercise 5a: Selective Breeding and Bioengineering

Students explore the principles and techniques of selective breeding and bioengineering in plants. They will learn methods to enhance desirable traits and understand the impact of these practices on agriculture and biodiversity.

Exercise 5b: Plant Identification and Uses

Students identify various plant species and understand their practical applications.
It provides guidelines for recognizing different plants and explores their uses in agriculture, medicine, and other fields.

Attributions

Title Image: https://www.usda.gov/media/blog/2021/12/01/economics-usdas-helping-science

- Agricultural Production and Prices by the United States Department of Agriculture is in the Public Domain.
- Fruit and tree nuts lead the growth of horticultural production value chart by the United States Department of Agriculture is in the Public Domain.
- How to Determine Your Cost of Production by Etaferahu (Eta) Takele. Copyright (c) 2022 Regents of the University of California. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
- Introduction to Basic Farm Financial Statement: Balance Sheet by Chris Boyer, et al., University of Tennessee. Copyright © University of Tennessee. Used with permission.
- Introduction to Return on Investment, Residual Income, and Economic Value Added as Evaluative Tools by Mitchell Franklin, Patty Graybeal, Dixon Cooper is licensed CC NC-SA. Access for free at https://openstax.org/books/principles-managerial-accounting/pages/1-why-it-matters
- Using Kernel Counts to Estimate Corn Yield Potential by Ron Heiniger, North Carolina State University, is Copyright © and used with permission.
- Depreciation by North Dakota State University is licensed CC NC-SA.
- Estimating Yield by North Dakota State University is licensed CC NC-SA.
- Overview of Economic Resources by North Dakota State University is licensed CC NC-SA.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
Readings

Overview

Memory is the process by which the human brain acquires, sorts, stores, retains, and retrieves information received and processed from the external environment. An important aspect of memory is effectively retaining and retrieving information once it is processed and stored. Understanding the concepts and components of memory, how memories are formed and processed, and why consciously and effectively mastering the process of memorization matters is fundamental.

Introduction

The ability to retain and recall information learned is imperative in classroom settings. Mastering the concept of memorization aids in academic and social development and allows for ease of information recollection, which is imperative for overall brain health and wellness. The historical concept of memory is the notion of experiences we have stored in our brains, and we can recall these experiences if we have not reached maximum brain capacity. However, decades of research in the fields of anatomy, physiology, neurobiology, and psychology have adjusted our notion of memories and how we sort, store, and retrieve them. In this chapter, we explore the basic concepts of memory, how information taken from the environment is processed and converted into useful information, and how converted information is moved into our short–term memory (STM) or long–term memory (LTM) banks.
Learning Objectives

- Define memory
- Understand memory and its processes
- Understand the importance of memorization for the college student
- Explore the anatomy of the human brain responsible for extracting, processing, and storing information
- Understand the physiology of information transfer
- Explore and understand the three different forms of memory
- Understand how memories are moved from STM to LTM
- Explore memory retrieval
- Discuss memory retention and ways to successfully store it in LTM
- Discuss concentration techniques and how to avoid distractions
- Understand and practice strategies for memorization
- Examine the difference and pathways between memory retention and memory loss

What is Memory

Definition and Explanation

The true definition or notion of memory has been debated for decades (Zemach, 1968). One of the most recent and accepted definitions of memory (see Zlotnik & Vansintjan, 2019) is the capacity to store and retrieve information while incorporating biological or chemical processes together, thereby changing both in a permanent way. This may seem like a complicated definition for a process that is part of our daily lives. However, memory is an intuitive process that involves extracting information from the environment through our biological senses and storing the information for later use. A simpler definition of memory is the encoding, storage, and retrieval of an experience - or, simply put, a recollection of our past experiences (Madan, 2020).

Importance of Memorization for College Students

Learning and memory are tightly linked. This is because we must take information from the environment and convert it to a useful form, a process learned over time. Memory is important for college students and their success; sharpening information intake for subsequent conversion, storage, and retrieval is a skill worth learning.
Knowledge is central to learning (the two concepts are bound), and the ability to retain and extract memories is a lifetime learning experience (Bailey & Pransky, 2014).

References

Bailey, F., & Pransky, K. (2014). Why learn about memory. In A. G. Bennett & N. S. Rebello (Eds.), Memory at work in the classroom: Strategies to help underachieving students (pp. 6–12). ASCD.

Madan, C. (2020). Rethinking the definition of episodic memory. Canadian Journal of Experimental Psychology / Revue canadienne de psychologie expérimentale, 74, 183–192.

Zemach, E. M. (1968). A definition of memory. Mind, 77(308), 526–536. https://www.jstor.org/stable/i339282

Zlotnik, G., & Vansintjan, A. (2019). Memory: An extended definition. Frontiers in Psychology, 10, Article 2523. https://doi.org/10.3389/fpsyg.2019.02523

Memory and Its Processes

Memory is the process of maintaining information over time (Matlin, 2005). Acquiring, sorting, and maintaining information involves three main processes or stages. First is our ability to gain information from the environment through sensory input. Second is our ability to preserve this information as memories. Third is our ability to recover these acquired memories. Central to this concept is our ability to learn the meaning of acquired information through personal or learned experiences and through sensory receptors. As information is learned from the environment, it is changed into a form we can understand before it can be stored. After the information has been stored into memories, retrieval becomes essential. For example, a large amount of information must be retrieved during an exam. Next, we discuss the three main processes of memory (see Figure 1): encoding, storage, and retrieval.

Figure 1. The three fundamental stages of memory: encoding, storing, and retrieving information from sensory input.

Encoding. Information is taken from the environment through sensory mechanisms.
Sensory mechanisms are the part of the nervous system responsible for processing environmental information. Next, this information is converted into an understandable form. This process is called encoding. Think of encoding information as if you were hitting the save button on your computer keyboard. Once this information has been "saved," or encoded, it can be retrieved at a later time. Encoding information does not happen in an instant. Several processes and pathways are involved. First, information from the environment is received through sensory input and certain structures (areas) within the brain. For example, when you are reading a book, the words you see (through visual input) must be converted into a notion or meaning unique to you. There are several ways in which information becomes encoded (McLeod, 2007), or changed into meaningful information: visual encoding, acoustic encoding, elaborative encoding, and semantic encoding.

Storage. Memory storage is the creation of a record of information. After information has been converted into a memory, it becomes stored. There are several variables to memory storage, including the duration of memories, where memories are stored, the kind of memories stored, and the capacity of memory storage. There are two main types of memory storage we discuss in later sections: STM and LTM. It is thought that the average adult can store between 5 and 9 items at one time (Miller, 1956). This is also known as the 7 (+/- 2) concept of memory storage.

Retrieval. Memory retrieval is the process of getting information out of storage and using it in a meaningful way (i.e., information requested for an exam or quiz). There are three stores of memory: sensory memory, STM, and LTM. These concepts are discussed in detail in the section titled Forms of Memory. Learning how to memorize (or move information into long-term storage) is important. LTM is stored by association.
During the sorting and storage phase, information is stored by associating it with an experience.

Did You Know? Repeated bouts of jet lag may harm the temporal lobe, an area of the brain important to memory, causing it to shrink in size and compromising memory. A lack of quality sleep also causes significant brain deterioration and memory loss.

References

Matlin, M. W. (2005). Cognition (6th ed.). John Wiley and Sons.

McLeod, S. A. (2007). Stages of memory – Encoding, storage, and retrieval. SimplyPsychology. https://www.simplypsychology.org/memory.html

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.

The Science of Memory

Recall that receiving, sorting, and encoding information for storage into memory banks is a process that takes information from the environment through sensory structures and sends the information via neural pathways to the brain for sorting and commitment into memory. In this section, we discuss the parts of the brain responsible for memory storage, sorting emotional memories, and motor learning. We also outline the physiological steps of gaining information from the environment as well as how this information is sorted and encoded into our memory banks. Encoding information into memory banks, coupled with successful retrieval, is imperative for academic success.

Anatomy of the Human Brain

The anatomy of the human brain responsible for acquiring, processing, and storing information received from the external environment involves several components. In this section, we discuss the three main structures of the brain responsible for motor learning and for consolidating, enhancing, and storing memories: the hippocampus, amygdala, and cerebellum.

Hippocampus. The hippocampus is the region of the brain responsible for regulating motivation, emotion, learning, and consolidating memories from STM to LTM (see Figure 2).
The hippocampus is a small, paired mass of densely packed neurons located deep within the temporal lobe of each cerebral hemisphere. It plays a major role in the formation of new memories from experienced events and in declarative memory for facts and knowledge. When an event occurs, information is not automatically stored in LTM. Instead, the information is slowly assimilated into LTM storage banks (Rubin et al., 2014).

Figure 2. Paired Hippocampus. (Credit: "Hippocampus", Life Sciences Database, licensed CC-BY-SA 2.1 Japan)

Amygdala. The amygdala is the region of the brain responsible for enhancing the consolidation of emotional memories (see Figure 3). The amygdala is a paired, almond-shaped structure located in the medial temporal lobe in front of the hippocampus. Its specific function is the consolidation of memories, particularly modulating the strength with which emotional memories are encoded. The stronger the emotional memory (e.g., a traumatic memory), the stronger the retention of that stimulus.

Figure 3. Amygdala. (Credit: "Amygdala", Life Sciences Database, licensed CC-BY-SA 2.1 Japan)

Cerebellum. The cerebellum is the region of the brain responsible for procedural memories and motor learning, such as actions that are routine and practiced (see Figure 4). Procedural memories include skills such as riding a bike, playing a musical instrument, or driving a car. The cerebellum, also known as the little brain, is a small structure located in the back portion of the skull, just below the temporal and occipital lobes and behind the brainstem. This structure is also involved in motor learning (controlled movement). Individuals who have experienced damage to the hippocampus might retain procedural memories, such as riding a bike or playing the piano, but may not remember specific facts about themselves or their lives (Rubin et al., 2014).
Likewise, those who have sustained damage to the cerebellum may retain emotional memories but have trouble remembering how to ride a bike.

Figure 4. The Cerebellum. (Credit: "BodyParts3D", Life Sciences Database, licensed CC-BY-SA 2.1 Japan)

References

Rubin, R. D., Watson, P. D., Duff, M. C., & Cohen, N. J. (2014). The role of the hippocampus in flexible cognition and social behavior. Frontiers in Human Neuroscience, 8, Article 742. https://doi.org/10.3389/fnhum.2014.00742

The Physiology of Memory

In this section, we discuss how memories are thought to be processed, stored, and distributed within the neural networks of the brain. These processes are known as encoding information, memory storage, and memory retrieval into conscious awareness. This process is important for academic success because information taken from the classroom must be translated into a usable form (i.e., you must be able to understand it). Next, this information needs to be stored so that you can successfully retrieve it. Think of this process as taking notes for class. First, do you understand what you are reading or hearing from your instructor? Next, how can you remember this information? Finally, will you be able to retrieve this information and relay it back on a quiz or exam?

Memories are processed, stored, and distributed within neural networks located throughout the brain (Mesulam, 1990). To form memories effectively, information received from the environment must pass through a process called encoding. Once information is encoded, it must be stored in memory for later retrieval. The retrieval process, which allows stored information to move into conscious awareness, is the most difficult process to consider. Three main processes are involved in this system: encoding information, memory storage, and memory retrieval into conscious awareness.

References

Amin, H. U., & Malik, A. S. (2014). Memory retention and recall process. In N. Kamel & A. S.
Malik (Eds.), EEG/ERP analysis: Methods and applications (pp. 201–237). CRC Press.

Bousfield, W. A. (1953). The occurrence of clustering in recall of randomly arranged associates. Journal of General Psychology, 49, 229–240.

Mesulam, M. (1990). Large-scale neurocognitive networks and distributed processing for attention, language, and memory. Annals of Neurology, 28, 597–613.

Encoding Information

Information is received through sensory input from the environment and must be labeled and changed, or coded, into a form the brain can use and store. This information is organized and stored alongside similar memories that already exist within our memory. We want to make certain that important memories are properly encoded. To encode information properly and efficiently, it must be meaningful to us. Recall from the previous section that there are several ways in which information is encoded: semantic, visual, acoustic, and elaborative.

Semantic encoding refers to the ability to encode words by their meaning. An example of semantic encoding is taking a list of words and memorizing them by grouping them into meaningful categories. This was demonstrated by Bousfield (1953) in an experiment in which a group of volunteers was asked to memorize a list of words grouped by meaning. Results from this experiment indicated volunteers could more easily recall words divided into meaningful categories than words listed randomly. Consider the list of words below. It would be difficult to memorize them if you had limited time to study the list:

Apple, Grape, Cat, Table, Spinach, Milk, Cup, Candle, Paper, Pear

However, if you simply reorganize the list in a way that is more usable (relatable) to you, your chances of memorizing and recalling the words will increase:

Food items: Apple, Grape, Spinach, Pear, Milk
Household items: Cup, Paper, Candle, Cat, Table

This list has now been reorganized to separate food items we commonly consume from items we may find in our homes.
Visual encoding is the ability to encode information through mental images we create when attempting to memorize facts or words. In short, visual encoding is the way we map data onto visual structures. It is much easier to memorize a list of words containing animals or familiar objects because they are considered "high-imagery" words, and we can create a mental image of each object. Likewise, a list of "low-imagery" words (e.g., truth or value) is more difficult to encode and memorize because it is not possible to create an image. Have you ever visualized a concept in your mind's eye? That is visual encoding. Let's take another look at the reorganized list of words from above. Again, it is relatively easy to remember a list of items we consume on a regular basis but a bit more difficult to memorize random items and things you might find in your home. To overcome this, simply tell yourself a story about your list of difficult words, and then visualize the story in your head. For example, to memorize the household items in the list below, think of this scenario: "My cat jumped on my table, knocking over my coffee cup and candle and dumping my papers all over the floor." You are much more likely to remember this short scenario than to memorize this list:

Cup, Paper, Candle, Cat, Table

Acoustic encoding refers to the encoding of sounds and words. An example of acoustic encoding is memorizing a song through rhyme, or young children memorizing the alphabet through the familiar song.

Elaborative encoding is taking new information and applying special (or personal) meaning to it to increase the likelihood of retention. For example, if you are required to memorize the date of a historical event, you may realize that a close friend or family member has a birthday on that date, and those two events become bound together in your memory banks.

References

Bousfield, W. A. (1953).
The occurrence of clustering in recall of randomly arranged associates. Journal of General Psychology, 49, 229–240.

Memory Storage and Retrieval

Memory Storage

Once acquired information has been encoded, it must be retained in sensory memory, STM, or LTM. Memory storage specifically refers to the nature of how memory is stored and how long it remains in storage: sensory, STM, or LTM (Amin & Malik, 2014). On average, most individuals can store between 5 and 9 items (7 +/- 2) in STM at one time because of the limited capacity of this store. This has very practical implications for incoming college students: if you have not mastered the process of getting information into LTM, then you will only be capable of storing, on average, 5–9 pieces of information at a time. The end result is an inability to retrieve information on quizzes and exams. Therefore, these memories are either lost or must be moved into LTM stores. LTMs, by contrast, can be stored indefinitely.

Memory Retrieval into Conscious Awareness

Memory retrieval into conscious awareness refers to getting information out of storage. LTM is stored and retrieved by association, whereas STM is retrieved sequentially.

Did You Know? Have you ever heard the expression "an elephant never forgets"? The origin of this idiom is meaningful, and many of us have heard it many times over our lifetimes. Elephants have a superior hippocampus compared to other animals, including humans. In fact, the hippocampus of an elephant takes up 0.7% of its brain; the hippocampus of a dolphin, in comparison, takes up only 0.05% of its brain.

References

Amin, H. U., & Malik, A. S. (2014). Memory retention and recall process. In N. Kamel & A. S. Malik (Eds.), EEG/ERP analysis: Methods and applications (pp. 201–237). CRC Press.

Stage Model of Memory

Memory is an essential function that allows for the acquisition, retention, and recollection of thoughts and events you have experienced.
Experiences and their resulting memories are processed over several stages, and these stages represent the length of time memories are available for recollection (Paller & Wagner, 2002). Several models of memory have been proposed by scientists over the years. However, the most widely accepted is the stage model of memory, proposed by Atkinson and Shiffrin (1968), which includes three categories of memory: sensory memory, STM, and LTM. These three categories depend upon an individual's personal experience in encountering and storing information for later use (see Figure 5).

Figure 5. The Three Main Stages of Memory. (Credit: "How Memory Functions", OpenStax College, licensed CC-BY 4.0 at http://cnx.org/contents/Sr8Ev5Og@5.52:-RwqQWzt@6/How-Memory-Functions)

References

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89–195). Academic Press.

Paller, K., & Wagner, A. (2002). Observing the transformation of experiences into memory. Trends in Cognitive Sciences, 6(2), 93–102. https://doi.org/10.1016/S1364-6613(00)01845-3

Forms of Memory

There are several forms of memory supported by brain systems, including STM and LTM. These forms of memory depend on the individual's experience in encountering and storing the memory (Paller & Wagner, 2002). Memory can be stored in a variety of ways; the three most commonly used stores are sensory memory, short-term memory, and long-term memory.

Sensory Memory

Sensory memory, the first stage of memory, includes memories processed from the environment through the five senses: sight, sound, taste, touch, and smell. This stage of memory gives the brain time to process the newly gained information.
Information obtained through sensory memory is brief, lasting only the amount of time it takes to process the information, typically less than 1 second. The temporal and occipital lobes of the brain are associated with sensations. Receptors associated with the five senses, or sensory receptors, receive information from the external environment and send it to the brain's decision-making centers. From there, the information is either lost or stored in STM or LTM. Sensory memory is considered the earliest and briefest form of memory. It has limited storage capacity and can be considered the passageway for information into STM or LTM. Information processing begins in sensory memory, then eventually moves into STM and sometimes LTM. There are three types of sensory memory, each associated with a different type of sensory input: iconic memory, echoic memory, and haptic memory.

Iconic Memory. Iconic memories are memories associated with visual sensory input - visual images that retain a mental representation. An example of how iconic memory techniques are useful in the classroom is taking a mental snapshot of information you are required to memorize and later retrieving that snapshot.

Echoic Memory. Echoic memories are memories associated with auditory sensory receptors. Examples of echoic memories are extremely pleasant sounds (e.g., birds singing) or unpleasant sounds (e.g., a bullhorn) you have heard and cannot forget.

Haptic Memory. Haptic memories are memories associated with tactile (sense of touch) receptors. Haptic memories are often unpleasant. For example, at a very young age, we all learned that stove burners are hot; most of us, as adults, test them with a quick touch or slap!

Short-Term Memory

STM, the second stage of memory, is responsible for holding information temporarily - that is, until the information is processed and sorted.
STM is associated with very brief neural communications to regions of the prefrontal cortex of the brain. This type of memory is also known as primary or active memory and represents events and sensory information an individual is currently thinking about or of which they are actively aware. This type of memory encompasses events ranging from about 20 seconds to a couple of days. Although STMs can be quickly forgotten, if these memories are revisited, they can be transferred to LTM stores. The hippocampus is the essential brain structure responsible for the transformation of STM into LTM storage. An example of how STM works is a situation in which a close friend introduces you to one of their friends. You may give them a quick greeting but then continue your conversation with your old friend. Chances are you will not remember the name of your "new friend" after a few minutes.

Long-Term Memory

LTM, the third stage of memory, represents information and knowledge held over an extended period (hours, days, months, or years). Some of the information stored in LTM may eventually be lost, but some memories can stay with you for the duration of your life. LTM is important because information retained in college is carried with you through advanced degrees and even into the workforce. This memory type is maintained by stable and permanent changes in neural connections spread throughout the brain. It is these connections that are lost to amyloid plaques as we age, a process implicated in dementia and forgetfulness. We will discuss reasons for forgetfulness in the next section. There are two main types of LTM: explicit and implicit.

Explicit Memory. Explicit memories are consciously remembered, such as those gained through knowledge or experience. Explicit memories can come from firsthand experiences you have had - for example, your first bike wreck (episodic memory) - or facts you know, such as how to add two numbers together (semantic memory).

Implicit Memory.
Implicit memories are memories not readily available for conscious retrieval. For example, you will always remember how to walk or ride a bike, but you may not remember how to explain the process to others.

References

Paller, K., & Wagner, A. (2002). Observing the transformation of experiences into memory. Trends in Cognitive Sciences, 6(2), 93–102. https://doi.org/10.1016/S1364-6613(00)01845-3

The Movement of Memories From STM to LTM

To move information from your working STM to your LTM, you need to make the information meaningful (Passolunghi & Siegel, 2001). Meaningful learning is our goal; making connections between new information and what we already know helps us learn the information deeply, instead of just repeating it back to an instructor in class or on an exam. Information you receive from the environment can move through all three stages of memory. This is not always the case, though, because most information we receive, whether environmental information gained from sensory input or information gathered from academic experiences, is readily lost if it is not consciously gathered and stored. The way you pay attention to the information is important. For example, you may enjoy some of your freshman courses and absorb the information given in class. Likewise, some classes may not be of interest to you, and it will become important to find ways to encode that information and move it into LTM. In short, if you consciously pay attention to, or are interested in, the information, then it will move to the next stage of memory - that is, STM (with the potential to move into LTM). However, if you only subconsciously pay attention to the information and are not interested in it, then processing will stop at sensory memory and the information will be forgotten.

Memory Retrieval

Information is passed from sensory memory to STM and held for a short period of time. Only a fraction of those memories, if processed mindfully, are encoded into LTM.
The encoding of this information allows you to assess it and deem it important enough to hold on to for future retrieval. Memory retrieval refers to the ability to get information out of storage. If we cannot remember something, it is because we could not retrieve it. To improve your ability to store information in LTM and retrieve it, it is necessary to organize the information in a sequential or orderly fashion. In the next section, we discuss ways to organize memories and information for long-term storage and retrieval.

Did You Know? There are two ways you process information, whether in class listening to a lecture or holding a conversation with someone: consciously, by actively attending to the information, or subconsciously, by passively taking it in.

References

Passolunghi, M. C., & Siegel, L. S. (2001). Short-term memory, working memory, and inhibitory control in children with difficulties in arithmetic problem solving. Journal of Experimental Child Psychology, 80, 44–57.

Retention, Recall, and Retrieval

Recall that LTM represents information and knowledge held for extended periods or even indefinitely. This information is recalled through prompts of recognition from previous experiences or through the conscious organization of information into LTM. Memory retention and retrieval are achieved through several avenues. In this section, we discuss the five main ways memories can be retained effectively: repetition (R3: read it, write it, commit it to memory), attaching special meaning, grouping (chunking) information, mnemonics, and acrostics. Proper encoding facilitates memory retention, and the key to college success is the ability to recall or retrieve these retained memories. In this section, we also explore ways to attach special meaning to information to achieve optimal memory retention and retrieval, thereby preventing memory loss (Amin & Malik, 2014).

Memory Retention

Memory retention is a person's ability to keep information stored in LTM such that it can be readily retrieved in response to a prompt (Bennett & Rebello, 2012).
Your success in college depends on your ability to recall this information when prompted. Several techniques will aid you in memory retention and retrieval: repetition, attaching meaning, grouping information into useful categories, using mnemonics, and using acrostics.

Repetition. Remember to read it, write it, and commit it to memory. Repetition is one of the most powerful tools affecting retrieval (Hintzman, 1976) and is the most familiar form of information retention. Repetition is the process of consciously repeating information to oneself or someone else. This form of retention works well because, in the process of repetition, the brain builds new connections between the information being memorized and a previously understood idea (assimilation). An example of effective repetition in memorization is creating and using note cards to study for exams.

Attach Meaning. This technique allows you to remember important information by connecting it to something already known. Here is an example:

- Remembering the direction of longitude and latitude is easier when you realize that the lines on a globe that run north and south are long, which coincides with LONGitude.
- Another way to make the connection is to notice there is an N in LONGitude and an N in north. Latitude lines must therefore run east to west, because there is no N in latitude.

Group Information Into Useful Categories. Also known as the chunking strategy, this is a useful technique that allows you to place information into categories that can be memorized more easily. An example of chunking is grouping historical events by era and memorizing each group separately.

Use Mnemonics. Another common method of encoding information into LTM is to give meaning to the information by applying a pattern to it. Mnemonics are a technique most of us learn at an early age, and they can remain instrumental in rapidly accessing information when prompted.
Mnemonics are memory devices that help us recall pieces of information. There are many types of mnemonics, including music, expression, rhymes, acronyms, and acrostics. We will discuss a few of them here.

- Acronyms (also known as expression mnemonics) - One of the most popular types of mnemonics used in academia. Acronyms are devices created by using the first letters of words to make a new word that will help you remember them. An example of using acronyms when studying for a quiz or exam is to create an expression using keywords from a list or paragraph that must be memorized.
- Music mnemonics - Music is second nature for most of us, and we have an impressive ability to remember the lyrics of our favorite songs. The same method we use to recall song lyrics can also work to recall other types of information. Just set a list or series of facts you need to remember to a jingle or a favorite song.
- Rhyming mnemonics - These are useful because important information can be put into the form of a poem. A couple of examples:
  - ♪ 30 days hath September, April, June, and November. All the rest have 31, except February my dear son. It has 28 and that is fine, but in Leap Year it has 29 ♫
  - ♪ In 1492, Columbus sailed the ocean blue ♫

Acrostics. An acrostic is a mnemonic device made by creating a sentence using the first letters of the key words in the items to remember. Here are a couple of examples:

- The order of operations in math problems (PEMDAS) can be remembered by the acrostic "Please Excuse My Dear Aunt Sally": Parentheses, Exponents, Multiply, Divide, Add, and Subtract.
- ROY G. BIV is an acronym for the colors of the spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet).

References

Amin, H. U., & Malik, A. S. (2014). Memory retention and recall process. In N. Kamel & A. S.
Malik (Eds.), EEG/ERP analysis: Methods and applications (pp. 201–237). CRC Press.

Bennett, A. G., & Rebello, N. S. (2012). Retention and learning. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 167–211). Springer. https://doi.org/10.1007/978-1-4419-1428-6_664

Hintzman, D. (1976). Repetition and memory. Psychology of Learning and Motivation, 10, 47–91. https://doi.org/10.1016/S0079-7421(08)60464-8

Memory Retrieval

Memory retrieval is the ability to get information out of memory storage and back into conscious awareness. Memory retrieval is important not only for everyday functioning but also for college success. There are several forms of memory retrieval: recall, recognition, and relearning.

Recall. Recall is the ability to access information without cues. Recall memories are memories that have been encoded previously and are likely permanently embedded in memory banks. An example of recall is the ability to write in cursive even after years of not using it.

Recognition. Recognition is the ability to access information by identifying it as something learned previously. An example of recognition is recognizing a correct answer on a multiple-choice exam.

Relearning. Relearning is the process of learning again information you acquired previously. Relearning is typically easier the second time because the information is already stored somewhere in your memory banks.

Memory Loss

Memory loss (the failure to retrieve LTMs) occurs in several different forms. It is a person's inability to remember events permanently or over a period of time, most often due to brain injury (trauma), illness, the effects of drugs and/or alcohol, lack of sleep, stress/depression, or aging. This section reviews several ways memories can be lost (or never encoded into LTM banks): ineffective coding, amnesia, and blunt trauma.
Ineffective Coding. Ineffective coding is also considered an encoding failure. Encoding is the process of converting information received through sensory input into a usable, stored form of memory. Encoding failure prevents this information from entering LTM; in essence, the memory fails to link.

Amnesia. Amnesia, or amnestic syndrome, is the inability to recall some memories (e.g., facts, general information, and life experiences) and is often the result of damage to certain regions of the brain, including the hippocampus and temporal lobe. Amnesia affects STM and causes difficulties in retaining new information and recalling past experiences.

Blunt Trauma. Blunt trauma to the brain caused by a head injury (e.g., shaken baby syndrome, concussion, intoxication) is a common cause of temporary or permanent memory loss.

Did You Know? An eidetic memory, which many refer to as a photographic memory, is a person's ability to recall an object, photograph, or past scene in detail and with great accuracy for an extended period of time (30+ seconds). An autobiographical memory is a person's ability to recall past events in great detail, including the exact dates they occurred. Fewer than 100 people worldwide possess highly superior autobiographical memory (HSAM). People with HSAM are not necessarily superior learners, but they are better at memory retention.

Chapter Summary

Memory, the process by which we acquire, sort, retain, and then retrieve information, is tightly linked to learning. Information taken from the environment is converted into a useful form (i.e., active learning), stored, and then later retrieved. This is a process we use and fine-tune over the course of our lifetimes. There are three main processes of memory: encoding (the uptake of information from the environment through sensory organs), storage (the creation of a record of learned information), and retrieval (the process of extracting information from storage and using it in a meaningful way).
The structures of the brain instrumental to memory (amygdala, hippocampus, and cerebellum) are responsible for regulating and enhancing emotional memories, storing memories, and regulating procedural memories and motor learning, respectively. There are several forms of memory supported by brain systems; the three most commonly studied and recognized are sensory memory, STM, and LTM. Sensory memory, as the name suggests, is the acquisition of information (which turns into memories) extracted from the environment through sensory mechanisms (i.e., sight, sound, taste, touch, and smell). STM is responsible for holding information temporarily and represents events and sensory data an individual is currently thinking about or of which they are actively aware. LTM represents information and knowledge held over an extended period (hours, days, months, or years). Information is stored in one of these memory systems according to the way it was encountered and processed. Getting information into LTM banks is especially important because information retained in college is carried and used throughout the course of your lifetime. It is possible, and quite common, to move information from STM into LTM. However, to achieve this, information must be revisited and processed in a meaningful way. Meaningful learning, our ultimate goal, is the ability to make connections between new information and what we already know. Most information we receive, whether environmental information or information gathered from academic experiences, is readily lost if it is not consciously gathered and stored. The way you pay attention to the information is important: if we consciously pay attention to information instead of subconsciously paying attention to it, memories are more likely to reach storage in LTM.
Memory retrieval is the ability to get information out of memory storage and back into conscious awareness. This concept is important not only for everyday functioning but also for college success. Failure to retrieve LTMs leads to memory loss and can be attributed to several factors, including brain injury, illness, negative effects of drugs and/or alcohol, lack of sleep, stress, and depression. The most critical factor influencing memory failure is time. Given that, the most effective way to retain memories is rehearsal: take notes, attach meaning to the information you have noted, and then repeat the information.
Source: "Foundations for College Success, Memory," OER Commons (https://oercommons.org/courseware/lesson/79234/overview), licensed under Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/).
Readings Overview In this chapter we will explore two skills most of us think we’ve already mastered, or at least can do well enough to get by: reading and notetaking. The goal is to make sure you’ve honed these skills well enough to lead you to success in college. Introduction “The mark of a successful college student is the mastery of knowing not only what to study but also how to study it.” – Patricia I. Mulcahy-Ernt & David C. Caverly Figure 7.1. Each of us reads and records information in our own way. (Credits: CollegeDegrees360 / Flickr / CC BY-SA 2.0) Student Survey Think about how you read – your habits or techniques, how academic, informational, and leisure reading differ, and what has and hasn’t worked well in the past. With that in mind, consider the questions below. They will help you determine how this chapter’s content relates to how you tackle academic reading. On a scale of 1 (I need significant improvement) to 4 (I’m doing great), reflect on how you’re doing right now on these statements: - I am reading on a college level. - I take good notes that help me study for exams. - I understand how to manage all the reading I need to do for college. - I recognize the need for different notetaking strategies for different college subjects. As we are introduced to new concepts and practices, it can be enlightening to reflect on how our understanding changes over time. We’ll revisit these questions at the end of the chapter to see if your perspective changes as we move forward. Learning Objectives By the end of this chapter, you should be able to do the following: - Discuss the way reading in college differs from your prior reading experiences. 
- Identify how to adapt to the shift from surface reading to in-depth academic reading. - Demonstrate the usefulness of strong notetaking for your college courses. Being a savvy information consumer is increasingly important because of the amount of information we encounter. Not only do we need to critically evaluate that information; we need to read it with a lens that separates fact from opinion, builds upon prior knowledge, and identifies credible sources. Reading and other literacies help us make sense of the world - from simple reminders to pick up milk to complex treatises on global concerns, we read to comprehend, and in so doing, our brains expand and we are better equipped to participate in scholarly conversations. In college, as we deliberately work to become stronger readers and better note takers, we are working toward ensuring success in our courses and increasing our chances to be successful in the future. Seems like a win-win, doesn’t it? But why? Well, reading improves our vocabulary, critical thinking, ability to make connections between dissimilar parts, and verbal fluency (Cunningham and Stanovich). Research continues to support the premise that one of the most significant learning skills necessary for success in any field is reading. If reading “isn’t your thing” or it’s an area you’ve always struggled in, make that your challenge. Take advantage of the study aids you have available, including human, electronic, and physical resources, to increase your fluency and performance. Your academic journey, personal information seeking, and professional endeavors will all benefit. I challenge you to find a way to make it your new “thing”. Attributions Content on this page is a derivative of “Reading and Notetaking: Introduction” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. References Cunningham, A.E. & Stanovich, K.E. (Spring/Summer 1998). 
What Reading Does for the Mind. American Educator, 22(1/2), 5. Mulcahy-Ernt, P.I. & Caverly, D.C. (2009). Strategic Study-Reading in Handbook of College Reading and Study Strategy Research, 177. Types of Reading If you don’t particularly enjoy reading, don’t despair. People read for a variety of reasons – leisure, information, academic, professional, etc. You may just have to step back and reflect on your reading habits, likes, dislikes, and struggles to find ways to overcome your personal obstacles. Consider adjusting your schedule to allow for more reading time, especially in college. Perhaps change how, when, or where you read, explore using an immersive reader app, or combine text with audio books. Every class will expect you to read more than you probably have in the past. Be prepared. We read small items for immediate information, such as notes, billboards, text messages, or directional signs. Online there’s a plethora of quick (and not-so-quick) information about fixing a faucet, sewing a button, or tying a knot. Each encounter is designed to meet a specific goal. They may not be stunning works of art, but they don’t need to be. When we consider why we read or watch more complex items, we can usually sort it into two categories: 1) reading to introduce new content and 2) reading to understand familiar content with greater depth. Reading to Introduce New Content Imagine your roommate is majoring in a topic you are completely unfamiliar with. You want your semester together to go well but know little about one another. Talking about each other's classes might help. So, you decide to do a little Googling. You don’t need to go in-depth into their area of study – you just need to scratch the surface. Chances are, you have done this sort of exploratory reading before. You may have read reviews of a new restaurant or looked at what people said about a movie or video game before deciding to spend the money. This reading helped you decide. 
In academic settings, much of what you read in your courses may be relatively new content to you. Or perhaps your prior knowledge is fairly general and your coursework leads you to dig deeper through reading. You may find you need to schedule more time for reading and digesting the information. Consider This… Imagine you were given a chapter to read in your American history class about the Gettysburg Address. Write down what you already know about this historic document. How might thinking through this prior knowledge help you better understand the text? | Reading to Better Understand Familiar Content Reading about unfamiliar content is one thing, but what if you already know something about the topic? Do you still need to keep reading about it? Probably. With familiar content, you can do some initial skimming of the text to determine what you already know, and mark what may be new information or a different perspective. You may not have to give your full attention to what you already know, but you will probably spend more time on the new nuggets of information so you can mesh them with what you already know. Is this writer claiming a radical new definition for the topic or an entirely opposite way to consider the subject matter? Are they connecting it to other topics or disciplines in ways you may not have considered? Figure 7.2. A bookstore or library can be a great place to explore. Aside from resources listed on your course syllabi, you may find something that interests you or helps with your course work. When we encounter material in a discipline-specific context and have some familiarity with the topic, we sometimes allow ourselves to become overconfident in our knowledge. Reading an article or two or watching a documentary on a subject does not make someone an expert or scholar on the topic. 
A scholar thoroughly studies a subject, usually for years, and works to understand all the possible perspectives, potential misunderstandings, and personal biases about the topic. Our goal is for you to one day be an expert or scholar in your field. Attributions Content on this page is a derivative of “Reading and Notetaking: The Nature and Types of Reading” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Time for Reading Reading textbooks, scholarly articles, or other in-depth material for class can seem daunting, so a strategic approach is certainly recommended. How much time should you allot to the task? What reading strategy should you use? Early in the semester, pull out your class syllabi and determine the reading requirements and expectations for each class. You will also need to understand your instructors’ expectations about students’ depth of reading. Do you need to read for detail, skim texts to become familiar with the topic, or a mixture of these approaches? Will you need to read prior to the lecture, after lecture, or both? Knowing this will help you decide how to schedule your time, how to tackle the reading assignments, and how to structure your notes. Still not convinced this is how you really want to spend your time while in college? It will pay off in the end. Are you apprehensive because you struggle with reading? Remember that reading is just one way of getting information and with today’s technology you can supplement text with audio, video, immersive reader, and translator apps. Find the tools that work for you. So, how do you carve out the time? A couple of approaches include determining your usual reading pace, scheduling active reading sessions, and practicing recursive reading. Determining Your Reading Pace. Select a section of text in a textbook or novel. 
Beginning at the top of a page, mark your starting point and time yourself reading that material for 5 minutes. Note how many pages you read. Multiply the number of pages by 12. This will determine your average hourly reading pace. Of course, your pace can be influenced by many factors – dense material, internal and external distractions, lack of interest or dull content, etc. – but it gives you a good estimate. For illustration purposes, if you were able to read 3 pages in 5 minutes, you should be able to read about 36 similarly formatted pages in one hour. Knowing this, you can determine how much time you need to finish an assigned text (chapter, book, article, etc.). If the novel you’re reading for English class is 350 pages, take the total page count (350) and divide by your hourly reading rate (36 pages per hour). It should take 9 to 10 hours to finish. Now you can schedule time to read for about 45 minutes a day for two weeks and you’ll be done with the novel.

| Reader | Pages Read in 5 Minutes | Pages per Hour | Approximate Hours to Read 350 Pages |
| Angel | 2 | 24 | 14 hrs, 35 mins |
| You | 3 | 36 | 9 hrs, 43 mins |
| River | 4 | 48 | 7 hrs, 18 mins |
| Jordan | 5 | 60 | 5 hrs, 50 mins |

Scheduling Time for Active Reading. When you set your reading pace, you were reading straight through – not stopping to re-read, look up definitions, or take notes. These are components of active reading, which takes about twice as long as reading through text without stopping. Learning to actively read is an important practice as you work to grasp new or complex concepts. Therefore, we need to schedule time for this type of reading, as well. Consider the reading expectations for each class – depth of reading, complexity of content, number or type of items, etc. Calculate your reading pace for each class’s reading requirements. The amount of time calculated for active reading may look unachievable – that is why scheduling is so important. 
Once you spread the task out over time, it is much more achievable.

Example Reading Times for Novel and Active Reading
| Reader | Pages Read in 5 Minutes | Pages per Hour | Approximate Hours to Read 350 Pages | Actively Read Pages per Hour | Approximate Hours to Actively Read 350 Pages |
| Angel | 2 | 24 | 14 hrs, 35 mins | 12 | 29 hrs, 10 mins |
| You | 3 | 36 | 9 hrs, 43 mins | 18 | 19 hrs, 27 mins |
| River | 4 | 48 | 7 hrs, 18 mins | 24 | 14 hrs, 35 mins |
| Jordan | 5 | 60 | 5 hrs, 50 mins | 30 | 11 hrs, 40 mins |

Attributions Content on this page is a derivative of “Reading and Notetaking: Effective Reading Strategies” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Tackling the Text If you Google ideas or talk to tutors, they may mention an acronym for their favorite active reading strategy – SQ3R, P2R, ISR, and PARR are just a few. Don’t let that intimidate you; the strategies all boil down to one overarching concept – methods for reading to learn and remember. Let’s get started. Preview. Start by previewing or prereading the textbook, chapter, or article you’ve been assigned. You’ll want to take note of how long the text is, the headings or sections and overall organization, any images or graphics and their subtext, and the comprehension or review questions at the end, if there are any. Next, look for an introduction at the beginning of the text and a summary or conclusion at the end. These will provide the most condensed version of the text’s content and key points. Each of these prereading components will help prime your mind for the next steps. Actively Read. Now comes the bulk of the work - actively reading the text by breaking it into chunks, section by section or paragraph by paragraph, and taking notes as you go. Writing your notes in a question-and-answer format may help structure them for easier re-reading later. 
For instance, rephrase section headings as question statements. What is the author trying to tell you? Then write your notes as answers to those questions. Apply the tip you learned in grade school: pay attention to the bolded items; they are bold for a reason. If you run across terminology you don’t know, look it up and write the definition using words that make sense to you. Revisit the images and graphics. Make note of their surroundings and how the author uses them to illustrate a point. If you run across something that really doesn’t make sense, no matter how many times you reread it, mark the page with a post-it note so you can follow up on it later. But don’t forget about it. Ask a classmate or tutor, do some additional research, or ask your professor. Did you notice we didn’t mention highlighting? We’re all guilty of highlighting for the sake of highlighting. Do you really remember what you highlight? Probably not. Do you highlight because it keeps you focused on the text? Instead, consider using your finger, the end of your pen, or a reading guide to track the text as you read. If highlighting really is your go-to technique, as soon as you highlight something, go to your notes and write down what you felt was important. Now that you’ve made it through the text, go back and reread the summary or conclusion. It should make more sense now and will help draw connections between your prior and new knowledge. Take a break. Research shows that spreading learning out over time helps your brain form stronger connections to the material, enabling better recall and application of the new knowledge later and for longer. This is one reason instructors recommend against cramming for tests. That said, now is a good time to walk away from the text for a day or two, shifting gears to read for a different class or work on other assignments. Revisit and review. After a couple days, return to your notes and the text. 
Review your notes, comparing them to your lecture notes and any other new knowledge you’ve gained since reading the text. If needed, add to your notes to help provide clarity. The final step is to write a summary, using your own words, that combines your notes from the text and your notes from lecture, answering questions you had asked of the author when you initially started the reading. Well before your or my time, Aristotle said, “exercise in repeatedly recalling a thing strengthens the memory.” That’s really our goal in learning, right? To make the learned material stick for the long term. Some of your courses will need you to continually build on your prior learning throughout the semester – and potentially throughout your college career. Set time in your schedule for regular, incremental review of your notes. Over time you should be able to read just the headings in your notes and know the associated details, retrieving them from memory. "How to Read a Textbook – Study Tips – Improve Reading Skills", by Kimberly Hatch Harrison, Socratica, located at https://youtu.be/l0vfLGHoREU Attributions Content on this page is a derivative of “Reading and Notetaking: Effective Reading Strategies” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Reading in College Different disciplines or subjects in college may have different expectations, but you can depend on all subjects asking you to read to some degree. You can succeed in meeting college reading requirements by learning to read actively, researching the topic and author, and recognizing how your own preconceived notions or biases affect your reading. As we’ve mentioned in a previous section, reading for college isn’t the same as reading for pleasure or personal interest. Your instructor may ask you to read articles, chapters, books, primary or secondary sources, technical information, and more. 
They may want you to have a general background on a topic before you dive into a discussion in class, to enrich discussions you’ve already had in small or large groups, or in preparation for an assignment. Part of the challenge is to review each course’s syllabus and pay attention to your instructors’ expectations to appropriately plan your reading time. Consider This… Can you think of a time when you’ve struggled reading college content for a class? Which of the strategies we’ve covered might have helped you with the reading and, subsequently, understanding and retention of the content? Why do you think those strategies would work? | Reading Primary and Secondary Sources. Primary sources are original documents such as letters, speeches, photographs, legal documents, and a variety of other texts and artifacts. When scholars look at these to understand a historical event or scientific challenge and then write about their findings, the scholar’s article is considered a secondary source. Primary sources may contain dated material that we now believe to be inaccurate. They may contain the personal beliefs and biases the original writer didn’t intend to openly publish, and they may even present fanciful or creative ideas that do not support current knowledge. Think of your own personal account of an event you witnessed. Your perspective – shaped by first impressions, unintentional biases, and misperceptions – will influence which details you include. Even when a photographer is capturing an event, what is and isn’t included in the frame, along with their vantage point and image composition, tells a story about the photographer’s perspective, bias, or intent. Likewise, secondary sources are inevitably another person’s interpretation of the primary source. Readers should remain aware of potential biases the secondary source writer inserts in the writing that may influence the reader. 
Most scholars work hard to avoid bias in their writing; you as a reader are trusting the writer to present a balanced perspective but must read critically. When possible, read the primary source in conjunction with the secondary source. Seek alternate secondary sources, compare their perspectives, and try to draw your own conclusions. Reading Scholarly Articles. Many scholars of a subject, including your instructors, publish their research in academic or trade journals. Academic, or scholarly, articles report on recent discoveries or original research, theoretical discussions, or the critical review of other published works or other scholars’ research. Often they are peer-reviewed, or refereed, by other subject scholars before they are published to ensure the content is supported by research, logical arguments, and solid writing. As a rising scholar, you will conduct your own research in many of your classes and your instructors will likely recommend using academic journal articles as part of your research. Trade journals are like academic journals except that they are written by and for professionals and practitioners in the field and cover industry or trade news, research, trends, legal updates, and other topics of interest to practitioners. Some trade journal articles are peer-reviewed prior to publishing and most can carry as much trustworthiness as a scholarly, peer-reviewed article. Example industries that rely heavily on trade journals are education, nursing, criminal justice and public safety, specific business sectors, construction sciences, and hospitality. Reading Graphics. Authors include graphics in their text for a variety of reasons. In a mathematics textbook, many of the graphics are formulas, illustrations, and sample problems. In the sciences, graphics may be diagrams, processes, charts, or data from experiments. In social sciences, charts may be combined with images, maps, and other graphics to illustrate a concept. 
Often the graphic has a caption and is referenced in the surrounding paragraphs. In each instance, inclusion of the visual element was intentional. Resist the urge to skim past these – it may be one of the key items that stands out in your memory later. As you review the image, question why the author included it in the text. What message does it reinforce or clarify? What stands out in the graphic? We’ll use the map of Napoleon’s Waterloo Campaign to illustrate the thought process you could follow when “reading” the visuals in your text. Ask yourself these questions: - What is the main point of this map/graphic/image/etc.? - Who is the intended audience? - Is it tied to a person (who), event or thing (what), period (when), or location (where)? - What does the legend (explanation of symbols) include – or not include? - What other information do I need to make sense of this graphic? Figure 7.3. Graphics, charts, graphs, and other visual items often convey important information and may appear on exams or in other situations where you’ll need to demonstrate knowledge. (Credit: Wikimedia Commons / CC0 – Public Domain) Reflection Question... Can you think of times you have struggled reading for a class? What technique did you use? Is there something from what you’ve read so far in this chapter that might have helped you understand the content? Why do you think those strategies would work? | Attributions Content on this page is a derivative of “Reading and Notetaking: Effective Reading Strategies” and “Reading and Notetaking: Taking Notes” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Notetaking Notes help you organize ideas and make meaning of information from readings, class lectures, and other information sources. Taking notes helps you stay focused on the topic and task (lecture, reading, etc.). 
Strong notes will build on your prior knowledge, help you discuss trends or patterns in the information, direct you toward areas where you may need to research further, and are a vital component of active reading, which we mentioned in a previous section. Think of your notes as potential study guides. In the Tackling the Text section we talked about revisiting your notes regularly – this remains true for long-term, sustained retention of all new information. Even if you have a photographic memory, notes are not a one-and-done deal – we need to reread, revise, rest, and revisit regularly. Research on this topic concludes that without active engagement after taking notes, most students forget 60–75% of the material within two days. This is called the Ebbinghaus Forgetting Curve, named after 19th-century German psychologist Hermann Ebbinghaus, and with practice you can counteract the Curve by reinforcing what you learned with regular review intervals starting shortly after you’ve taken notes (Fuchs, 1997). Consider This… Do you currently have a preferred way to take notes? When did you start using it? Has it been effective? What other strategy might work for you? | Preparing to Take Notes Why do we take notes? What are your priorities? Special techniques or habits? Are you looking for new, more effective ways to take notes? The notetaking process is personal and unique to you – just like one person’s method of organizing is different than another’s. The trick is figuring out what works best for you. The best notes are ones you take in a methodical manner that makes frequent revision and review easy as you progress through a topic or class. Remember in grade school when the supply list included 3-ring binders and dividers? It was the teachers’ way of teaching us to be organized. For some students it worked - but not for all. Perhaps over the years you’ve discovered graph-paper composition books or a notetaking app works better for you. 
Maybe you’re still trying to figure it out. That’s okay – just keep trying. Figure 7.4. The best notes are the ones you take in an organized manner. Frequent review and further annotation are important to build a deep and useful understanding of the material. (Credit: English106 / Flickr / Attribution 2.0 Generic (CC-BY 2.0)) There is relatively new research on whether handwritten or typed notes are more effective for retention of material. Mueller and Oppenheimer (2014) found that handwriting notes and using a computer for notetaking each have pros and cons, and most researchers agree that the format is less important than what students do with the notes. Attributions Content on this page is a derivative of “Reading and Notetaking: Taking Notes” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. References Fuchs, A. H. (1997). Ebbinghaus’s contributions to psychology after 1885. American Journal of Psychology, 110(4), 621–634. Mueller, P. A., & Oppenheimer, D. M. (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science, 25(6), 1159–1168. Notetaking Systems Whichever notetaking system you choose – computer-based, pen & paper, sketched, note cards, text annotations, and so on - the best one is the one that you will use consistently and that accomplishes its goal. The art of notetaking is not automatic for anyone; it takes practice. Unless your instructor expects a specific notetaking style in their class, you are free to use techniques from different systems to match your style. Just keep yourself – and your notes – organized. At the very least, start notes with an identifier, including the date, course name, topic, and any other information you think will help you when you revisit the notes later. 
Consider leaving some blank space in your notes so you can add new ideas, questions, or clarifications to the original notes as your knowledge on the topic expands through additional readings, lectures, and explorations. You may have a notetaking style you have used for all your classes. When you were in high school, this one-size-fits-all approach may have worked. Now that you’re in college, reading and studying more advanced topics, your old method may still work, but you should have some different strategies in place if you find that your previous method isn’t working. Sometimes different subjects need different notetaking strategies. Cornell Method. One of the most recognizable notetaking systems is the Cornell Method, a method devised by Cornell University education professor Dr. Walter Pauk in the 1940s. In this system, you take a sheet of notebook paper and draw lines to divide the paper into four sections: a two-inch horizontal section at the top of the page, a two-inch section at the bottom of the page, and a vertical line two inches from the left edge of the middle section, leaving the biggest area to the right of the vertical line. In the top section include information that provides context for the notes – topic, class, date, the overarching question the notes will answer, etc. Figure 7.5. The Cornell Method provides a structured, organized approach that can be customized. Use the largest section (middle-right of the page) to record the main points of the lecture or reading, preferably in your own words. Abbreviate or use symbols if they make sense to you and use bullet points or phrases instead of complete sentences. After the notetaking session, set the notes aside for a few hours. Then pull out your notes, re-read what you wrote, and fill in any details you missed or that need clarifying. 
In the left column add one- or two-word key ideas or clues that will help you recall the information later. Once you are satisfied with the middle sections, summarize this page of notes in two or three sentences in the section at the bottom of the sheet. Before you move onto something else, cover the large notes column, and quiz yourself over the key ideas. Repeat this step often to reinforce your ability to make the connections between lectures, readings, and assignments. Watch this video from the Learning Strategies Center at Cornell for ideas on how to adapt Cornell Notes to different classes or note-taking purposes. "How to Use Cornell Notes", by Learning Strategies Center at Cornell University, located at https://youtu.be/nX-xshA_0m8 Outlining. You can take notes in a formal outline if you prefer, using traditional outline numbering (Roman numerals, indented capital letters, and Arabic numerals) or a multi-level bulleted list. In both, each indent indicates the transition from a higher-level topic to the related concepts and then to the supporting information. Some people only need keywords to spark their memory, but others will need phrases or complete sentences, especially if the material is complex. The main benefit of an outline is how organized it is, but it can be tricky to maintain if the lecture or presentation is moving quickly or covering many diverse topics; it may work best when actively reading. The following outline excerpt illustrates the basic idea: - Dogs (main topic–usually general) - German Shepherd (concept related to main topic) - Protection (supporting info about the concept) - Assertive - Loyal - Weimaraner (concept related to main topic) - Family-friendly (supporting info about the concept) - Active - Healthy - Cats (main topic) - Siamese Chart or Table. Having difficulty comparing or contrasting main ideas? A chart might help. 
Divide your paper into columns with headings that include topics or categories you’ll need to remember. Then write notes in the appropriate columns as that information comes to light in the presentation or the reading. This instantly provides an organized set of notes to review later.

Example of a Chart to Organize Ideas and Categories

|               | Structure | Types | Functions in Body | Additional Notes |
|---------------|-----------|-------|-------------------|------------------|
| Carbohydrates |           |       |                   |                  |
| Lipids        |           |       |                   |                  |
| Proteins      |           |       |                   |                  |
| Nucleic Acid  |           |       |                   |                  |

Concept Mapping and Visual Notetaking. A visual notetaking method is called mapping, mind mapping, or concept mapping, although each of these names can have slightly different uses. Many variations can be found online, but the basic principles are that you are making connections between main ideas through a graphic representation. Some can get elaborate with colors and shapes, but simple is certainly okay – remember, match your style and personal preference. No matter how much artistic flair is in the map, the general concept is for main ideas to be front-and-center with supporting concepts branching out. Figure 7.6. Mind mapping can be an effective, personal approach to organizing information. (Credits: Safety Professionals Chennai, Elementofblank, & http://mindmapping.bg / Wikimedia Commons / CC BY-SA). Feeling exceptionally artistic? Consider drawing representations of concepts instead of using only text or adding color for emphasis. According to educator Sherrill Knezel in her article “The Power of Visual Notetaking,” this strategy is effective because “when students use images and text in notetaking, it gives them two different ways to pull up the information, doubling their chances of recall.” Not artistic? Don’t worry; the images don’t need to be perfect, just lodged in your memory. "Drawing in Class", by Rachel Smith at TEDxUFM, located at https://youtu.be/3tJPeumHNLY Not sure which method to use? 
Play with different types of notetaking techniques and find the method – or methods – you like best. Once you find what works for you, stick with it. You will become more efficient the more you use it, and your notetaking, review, and recall will become, if not easier, certainly more organized and memorable. Attributions Content on this page is a derivative of “Reading and Notetaking: Taking Notes” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. References Knezel, S. (2016, December 28). The Power of Visual Notetaking. Education Week. https://www.edweek.org/education/opinion-the-power-of-visual-notetaking/2016/12 Annotating Your Notes Annotating notes - adding additional details, new insights, or clarifications - after the initial notetaking session will up your study skills game by reinforcing the material in your mind and strengthening your memory. Annotations can refer to anything you do with a text to enhance it for your particular use. The annotations can include highlighting vocabulary terms, writing in definitions for unfamiliar terms, adding questions in the margin, underlining or circling key concepts, drawing images to catch your attention, or otherwise marking a text for future reference. Highlighting is one form of annotation. However, the only reason to highlight is to draw attention to small bits so you can easily pick out that ever-so-important information later. A common mistake we have all made is not knowing when to stop, ending up with a page full of yellow (or whatever color(s) you prefer). If what you need to recall from the passage is a particularly fitting definition of a vocabulary term or concept, highlighting the entire paragraph is less effective than highlighting just the actual term. Your mantra for highlighting text should be less is more. Always read the text first, then go back and highlight what you feel needs special emphasis. 
Another way to annotate is to underline significant words or passages. Sure, it is not quite as much fun as its colorful cousin, highlighting, but underlining provides precision to your emphasis. Need extra emphasis? Underline twice, draw a box around the information, or use different colors. I personally like to draw stars and arrows to draw my eye to text or images I need to remember, research further, or revisit again later. Realistically, you may end up doing each of these annotation styles in the same text at different times. Repeated review is critical to learning, so plan to come back to the same text multiple times, adding annotations each time as your understanding evolves. With experience in reading discipline-specific texts, writing papers, or taking tests, you will know better what to include in your annotations. Figure 7.7. Annotations may include highlighting important concepts, defining terms, writing questions, underlining or circling key terms, or otherwise marking a text for future reference. What you have to remember while you are annotating, especially if you are going to use multiple annotation styles, is not to overdo whatever method(s) you use. Be neat about it - its organization needs to make sense when you revisit the material later. Attributions Content on this page is a derivative of “Reading and Notetaking: Taking Notes” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Developing Your Strategy Marlon was totally organized and ready to take notes in a designated course notebook at the beginning of every philosophy class session. He always dated his page and wrote in the discussion topic. He had various colored highlighters ready to code the different purposes he had defined: vocabulary in pink, confusing concepts in green, and yellow for sections that would need additional explanations. 
He also used his own shorthand and a variety of symbols for questions (question mark), probable test items (eyes), additional reading suggestions (star), and questions he would ask his instructor before the next class. Doing everything so precisely, Marlon’s methods seemed like a perfect example of how to take notes for success. Inevitably though, by the end of the hour-and-a-half class, Marlon was frantically switching between writing tools, unable to maintain the same pace as the instructor. What went wrong? He had a solid plan and was clearly organized. But what he was trying to accomplish might have been more successful over time during his reread, review, and revise (or annotate) study sessions. Marlon was suffering from trying to do too much all at once. It’s an honest mistake, but it added to his stress level. Notetaking in class is just the beginning. Your instructor likely gave you an assignment to read or complete before class so you are prepared for the material that will be presented during class. In class you may be occupied by more than passively sitting-and-getting. It is reasonable to anticipate group discussions, working with classmates, or performing some other activity that would take you away from note taking. Does that mean you should ignore taking notes for that day? Most likely not. You may need to summarize the activities from class, make note of points that stand out in your memory, or jot down any questions that come to mind after the activities. Return to Your Notes. Later, go back to your notes and add in missing parts. It is best to do this within the first 24 hours after class, if not on the same day. Just as you may generate questions as you read new material, you may leave class with new questions. Write those down in your notes for that class and make it a point to ask the instructor, read more on the topic, do a little research, or a combination of all of these. 
Just as we calculated the amount of time you will need to read the various texts assigned in your classes and then set a schedule, it is just as important to intentionally schedule time to revisit your notes - notes from lectures as well as readings. Write it in your planner, set a reminder on your phone, include it in your plan for the day or week - whatever works best for you. Your notes should enhance how you understand the lessons, readings, lab sessions, and assignments, helping you prepare not only for the next test but also for a growing understanding of the subject. The cycle of reading, notetaking in class, reviewing and enhancing your notes, and preparing for tests is part of a continuum you will ideally carry into your professional life. Try not to take shortcuts; recognize each step in the cycle as a building block. Learning doesn’t end, which shouldn’t fill you with dread; it should help you recognize that all this work you’re doing in the classroom and during your study and review sessions is ongoing and cumulative. Practicing effective strategies now will help you be a stronger professional. Attributions Content on this page is a derivative of “Reading and Notetaking: Taking Notes” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. Chapter Summary Reading and notetaking are major components of successful studying and learning. In college, the expectation is that you will consume considerable amounts of information in each subject through readings, research, lectures, conversation, and more. You may encounter reading situations, such as journal articles and long or technical textbook chapters, that are more difficult to understand than texts you have read previously. As you progress through your college courses, use reading strategies to help you complete the reading assignments and retain the information. 
Likewise, you will need to take notes that are complete and comprehensive, yet organized, to help you study and recall the information. Learn to be deliberate in your reading and notetaking. Remember the questions we asked at the beginning of this chapter? It is time to revisit them. As you answer them, consider what we’ve discussed in this chapter and reflect on your progress as a reader and notetaker. As a reminder, answer on a scale of 1 (weak) to 4 (strong). - I am reading on a college level. - I take good notes that help me study for exams. - I understand how to manage all the reading I need to do for college. - I recognize the need for different notetaking strategies for different college subjects. Compare your scores to those you recorded at the beginning of the chapter. What has changed? Are there strategies or practices you have been trying as you’ve read through this text or one that you plan on trying this semester? Develop a plan and put it into action. Attributions Content on this page is a derivative of “Reading and Notetaking: Summary” and “Reading and Notetaking: Rethinking” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction.
Learning Activities Overview Learning Activities for Unit 7 Activity 7.1 Practice the Search Process Try an experiment with a group of classmates. Without looking on the Internet, try to brainstorm a list of 10 topics that you may all be interested in but know very little or nothing at all about. Try to make the topics somewhat obscure rather than ordinary - for example, the possibility of the non-planet Pluto being reclassified again as opposed to something like why we need to drink water. After you have this random list, think of ways you could find information about these weird topics. Our short answer is always: Google. But think of other ways as well. How else could you read about these topics if you don’t know anything about them? You may well be in a similar circumstance in some of your college classes, so listen carefully to your classmates’ ideas on this one. Think beyond standard answers like “I’d go to the library,” and press for what a researcher would do once they are at the library. What types of articles or books would you try to find? One reason that you should not ignore the idea of doing research at the library is that once you are there and looking for information, you have a vast number of other sources readily available to you in a highly organized location. You also can tap into the human resources represented by the research librarians, who can likely redirect you if you cannot find appropriate sources. Once you have the resources to answer your questions, what do you do with them? What would be your plan of attack, so to speak? Attributions Content on this page is a derivative of “Reading and Notetaking: Summary” and “Reading and Notetaking: Rethinking” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction. 
Activity 7.2 Explore Reading & Notetaking Resources What resources can you find about reading and notetaking that will help you develop these crucial skills? How do you go about deciding what resources are valuable for improving your reading and notetaking skills? The selection of study guides and books about notetaking vary dramatically. Ask your instructors or a campus librarian for recommendations. Understand the list below is not comprehensive but will give you a starting point. - College Rules!: How to Study, Survive, and Succeed in College, by Sherri Nist-Olejnik and Jodi Patrick Holschuh. More than just notetaking, this book covers many aspects of transitioning into the rigors of college life and studying. - Effective Notetaking, by Fiona McPherson. This small volume has suggestions for using your limited time wisely before, during, and after notetaking sessions. - How to Study in College, by Walter Pauk. This is the book that introduced Pauk’s notetaking suggestions we now call the Cornell Method. It is a bit dated (from the 1940s), but still contains some valuable information. - Learn to Listen, Listen to Learn 2: Academic Listening and Note-taking, by Roni S. Lebauer. The main point of this book is to help students get the most from college lectures by watching for clues to lecture organization and adapting this information into strong notes. - Study Skills: Do I Really Need this Stuff?, by Steve Piscitelli. Written in a consistently down-to-earth manner, this book will help you with the foundations of strong study skills, including time management, effective notetaking, and seeing the big picture. - “What Reading Does for the Mind,” by Anne Cunningham and Keith Stanovich, 1998, https://www.aft.org/sites/default/files/periodicals/cunningham.pdf - Adler, Mortimer J. and Charles Van Doren. How to Read a Book: The Classic Guide to Intelligent Reading. NY: Simon & Schuster, 1940. - Berns, Gregory S., Kristina Blaine, Michael J. Prietula, and Brandon E. Pye. 
Brain Connectivity, Dec. 2013, ahead of print. http://doi.org/10.1089/brain.2013.0166 Attributions Content on this page is a derivative of “Reading and Notetaking: Summary” and “Reading and Notetaking: Rethinking” by Amy Baldwin, published by OpenStax, and is licensed CC BY 4.0. Access for free at https://openstax.org/books/college-success/pages/1-introduction.
Pax Mongolica Overview Pax Mongolica The Mongol Empire expanded through brutal raids and invasions, but also established relatively secure routes of trade and technology between East and West. In a number of ways it foreshadowed globalization. Learning Objectives Identify and assess factors in the rise, decline, and disintegration of the Mongol empire(s). Identify and assess the impact of the Mongol empire(s). Keywords / Key Concepts Genghis Khan: founder of the thirteenth-century Mongol empire in Eurasia Kublai Khan: Mongol leader of the Yuan dynasty in China, and grandson of Genghis Khan Pax Mongolica: also known as the Mongol Peace, a system of relationships across Mongol-dominated Asia that allowed trade, technologies, commodities, and ideologies to be disseminated and exchanged across Eurasia Rise of the Mongol Empire During Europe’s High Middle Ages, the Mongol Empire began to emerge and ultimately became the largest contiguous land empire in history. The Mongol Empire began in the Central Asian steppes and lasted throughout the 13th and 14th centuries. At its greatest extent, it included all of modern-day Mongolia, China, parts of Burma, Romania, Pakistan, Siberia, Ukraine, Belarus, Cilicia, Anatolia, Georgia, Armenia, Persia, Iraq, Central Asia, and much or all of Russia. Many additional countries became tributary states of the Mongol Empire. The empire unified the nomadic Mongol and Turkic tribes of historical Mongolia under the leadership of Genghis Khan, who was proclaimed ruler of all Mongols in 1206. The empire grew rapidly under his rule, and then continued to expand under his descendants, through military conquest and invasion. By 1300 the empire controlled much of Asia, including China, and eastern Europe. The vast transcontinental empire connected the east with the west under a Pax Mongolica, or Mongol Peace, through conquest, invasion, and forced displacement of peoples on an unprecedented scale. 
Although the Pax Mongolica allowed trade, technologies, commodities, and ideologies to be disseminated and exchanged across Eurasia on Mongol terms, the Mongols maintained order through fear and intimidation. Historians regard the Mongol raids and invasions as some of the deadliest and most terrifying conflicts in human history. Genghis Khan and the Mongol Empire Before Genghis Khan became the leader of Mongolia, he was known as Temujin. He was born around 1162 in modern-day northern Mongolia, into a nomadic tribe with noble ties and powerful alliances. These fortunate circumstances helped him unite dozens of tribes in his adulthood via alliances. He used diplomacy, political manipulation, and military power to expand his Mongol empire. He also forbade looting of his enemies without permission, and he implemented a policy of sharing spoils with his warriors and their families instead of giving it all to the aristocrats. His meritocratic policies, among other tactics, attracted a broader range of followers, but also alienated his uncles and brothers, who competed with him for control of the empire. War ensued from 1203 through 1205. Temujin prevailed, destroying all the remaining rival tribes and bringing them under his sway. In 1206, Temujin was crowned as the leader of the Great Mongol Nation. It was then that he assumed the title of Genghis Khan, meaning universal leader; this marked the start of the Mongol Empire. Genghis Khan maintained control over his empire through a combination of violence, surveillance of his subject peoples, and a relatively lenient policy toward religious and local traditions. With his death in 1227, his sons and grandsons continued his empire, although dividing it into four smaller empires, or khanates. Innovations Under Genghis Khan As ruler of a vast and diverse empire, Genghis Khan implemented a number of innovations. These innovations allowed him to maintain order, and also facilitated trade and exchanges of information across Eurasia. 
These innovations included reorganization of the army, elimination of tribal loyalties that threatened his control, establishment of his personal Imperial Guard, commission of a new law code, new taxes, administrative reforms, allowance of a greater voice for women and limited religious and cultural freedom for various groups within the empire, and the encouragement of greater literacy in the empire's Mongolian script. This limited tolerance did not constitute freedom for these subject peoples. For example, Jewish kosher traditions and Muslim halal traditions were cast aside in favor of Mongol dining and social customs. Destruction and Expansion Under Genghis Khan Along with his relatively benign policies, Genghis also wreaked havoc and destruction across Asia. Mongol military tactics were based on the swift and ferocious use of mounted cavalry, cannons, and siege warfare, which crushed even the strongest European and Islamic forces; these troops left a trail of devastation behind. Cities that resisted the Mongols were subject to destruction, and/or the forced relocation or murder of city residents. For example, after the conquest of the city of Urgench, each Mongol warrior, in an army that might have consisted of 20,000 soldiers, was required to execute 24 people. The dark side of Genghis Khan’s rule also can be seen in the destruction of kingdoms in the Middle East, Egypt, and Poland, along with the replacement of the Song Dynasty by the Yuan Dynasty. Many local populations in what is now India, Pakistan, and Iran considered the great khan to be a blood-thirsty warlord set on destruction. Impact of the Pax Mongolica The Pax Mongolica refers to the relative stabilization of the regions under Mongol control during the height of the empire in the 13th and 14th centuries. The Mongol rulers maintained peace and relative stability in such varied regions because they did not force subjects to adopt Mongol religious or cultural traditions. 
However, they still enforced a legal code known as the Yassa (Great Law), which stopped feudal disagreements at local levels and made outright disobedience a dubious prospect. It also ensured that an army could be raised in a short time and gave the khans access to the daughters of local leaders. The constant presence of troops across the empire also ensured that people followed Yassa edicts and maintained enough stability for goods and for people to travel long distances along established trade routes. In this environment, the largest empire ever to exist helped one of the most influential trade routes in the world, the Silk Road, to flourish. This route allowed commodities such as silk, pepper, cinnamon, precious stones, linen, and leather goods to travel between Europe, the Steppe, India, and China. Ideas also traveled along the trade route, including major discoveries and innovations in mathematics, astronomy, papermaking, and banking systems from various parts of the world. Famous explorers, such as Marco Polo, also enjoyed the freedom and stability the Pax Mongolica provided, and were able to bring back valuable information about the East and the Mongol Empire to Europe. End of the Mongol Advance A number of factors brought an end to Mongol expansion into eastern Europe and Asia. In both eastern Europe and Asia Mongol forces were at the limits of their supply system. In addition, differences in the topography and climate of eastern Asia and Europe neutralized the Mongol advantage in mobile warfare. Although one of Genghis' grandsons, Kublai Khan, was able to conquer China and established the short-lived, Mongol-controlled Yuan Dynasty therein, Kublai failed to conquer other east Asian peoples, including the Japanese and the Vietnamese, during the last quarter of the thirteenth century. East Europeans also united in their resistance to Mongol expansion, including a number of east European cities and fortresses. 
One example of such resistance was the Klis fortress in Croatia. In 1242 this east European fortress successfully held out against the Mongols. Decline and Demise of the Mongol Empire By the time of Kublai’s death in 1294, the four separate Mongol empires, or khanates, were each pursuing their own separate interests and objectives: the Golden Horde Khanate in the northwest, the Chagatai Khanate in the west, the Ilkhanate in the southwest, and the Yuan Dynasty, based in modern-day Beijing. In 1304, the three western khanates briefly accepted the rule of the Yuan Dynasty in name only. This fragmentation and weakness allowed the Chinese Ming Dynasty to take control in 1368, while Russian princes also slowly developed independence over the 14th and 15th centuries. With these developments, the Mongol Empire finally dissolved. Legacies of the Pax Mongolica The Pax Mongolica left numerous legacies, including cultural, political, religious, and technological exchanges across the empire’s trade routes; the spread of pandemics, such as the Black Death; and resurgent nationalism among peoples subjected to Mongol rule, including the Chinese and the Russians. In China opposition to the Mongol Yuan Dynasty under Kublai Khan fed the rise of the Ming Dynasty in 1368. Finally, Mongol control over much of Asia facilitated additional European exploration of Asia and fed growing European interest in it. The most significant manifestation of this interest was Columbus’s 1492 voyage in search of east Asia, inspired in part by Marco Polo and his alleged travels. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - Abraham Cresques, Atlas catalan, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Located at: https://commons.wikimedia.org/wiki/File:Caravane_Marco_Polo.jpg. License: CC BY-SA: Attribution-ShareAlike - Yassa. 
Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Yassa. License: CC BY-SA: Attribution-ShareAlike - Silk Road. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol invasions and conquests. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - HIST302: Medieval Europe. Provided by: Saylor. Located at: https://legacy.saylor.org/hist302/Intro/. License: CC BY: Attribution - Tributary state. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Changes in Eurasia - Mongol Conquest and Aftermath. Provided by: Wikibooks. License: CC BY-SA: Attribution-ShareAlike - Mongol Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Pax Mongolica. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol Empire map. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Extent of the Silk Road. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Marco Polo costume tartare. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Mongol invasions and conquests. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Mongol_conquest. License: CC BY-SA: Attribution-ShareAlike - Mongol invasion of Central Asia. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol conquest of the Kara-Khitai. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Changes in Eurasia - Mongol Conquest and Aftermath. Provided by: Wikibooks. License: CC BY-SA: Attribution-ShareAlike - Mongolian script. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - HIST302: Medieval Europe. Provided by: Saylor. Located at: https://legacy.saylor.org/hist302/Intro/. License: CC BY: Attribution - Mongol Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol Empire. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Yuan Emperor Album Genghis Portrait. 
Provided by: Wikipedia. License: Public Domain: No Known Copyright - Jin Jar. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Mongol Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol invasion of Central Asia. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol Empire. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Mongol conquest of the Kara-Khitai. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mongol invasions and conquests. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Yuan Emperor Album Genghis Portrait. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Sung Dynasty 1141. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/File:Sung_Dynasty_1141.png. License: CC BY-SA: Attribution-ShareAlike - Chinese Gunpowder Formula. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Mongol Hunters Song. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Battle of Mohi. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ögedei Khan. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - HIST201: History of Europe, 1000 to 1800. Provided by: Saylor. Located at: https://legacy.saylor.org/hist201/Intro/. License: CC BY: Attribution - Steppe. Provided by: Wiktionary. Located at: https://en.wiktionary.org/wiki/steppe. License: CC BY-SA: Attribution-ShareAlike - Muscovy. Provided by: Saylor. Located at: https://resources.saylor.org/wwwresources/archived/site/wp-content/uploads/2011/01/Muscovy.pdf. License: CC BY: Attribution - steppe. Provided by: Wiktionary. License: CC BY-SA: Attribution-ShareAlike - Mongol Invasion of Europe. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Coronation Of Ogodei 1229. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Bitwa pod Legnicą. Provided by: Wikipedia. 
License: Public Domain: No Known Copyright - Division of the Mongol Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - HIST201: History of Europe, 1000 to 1800. Provided by: Saylor. Located at: https://legacy.saylor.org/hist201/Intro/. License: CC BY: Attribution - Changes in Eurasia - Mongol Conquest and Aftermath. Provided by: Wikibooks. License: CC BY-SA: Attribution-ShareAlike - Kublai Khan. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Song dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Yuan dynasty. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Yuan_Dynasty. License: CC BY-SA: Attribution-ShareAlike - Trebuchet 2. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Qubilai Setsen Khaan. Provided by: Wikipedia. License: Public Domain: No Known Copyright
East Asian Interactions with Europe Overview East Asian Interactions with Europe Maritime voyages eclipsed the prodigious efforts of Europeans to cross Asia by the mid-sixteenth century. Consequently, East Asian interactions with Europe were dictated by the Pacific Ocean. These interactions took the forms of trade, religious missions, and cultural and technological exchanges, usually from Europeans to east Asians. Learning Objectives Identify and assess the forms, effects, and repercussions of East Asian interactions with Europe. Keywords / Key Concepts Matteo Ricci: Italian Jesuit who worked as a Christian missionary and cultural intermediary in China from 1582 until his death in 1610 Tokugawa shogunate: the last feudal Japanese military government, which existed between 1603 and 1867 Sakoku: period of Japanese isolation dictated by the Tokugawa Shogunate from the early seventeenth to mid-nineteenth centuries During the early modern period, trade between Europeans and east Asians was asymmetrical, with European ships visiting east Asian ports. European traders easily found east Asian goods and commodities that would sell in European and European-American markets, but they had more difficulty finding goods that interested east Asians. A number of east Asian governments, including those of the Ming and Qing Dynasties of China and the Tokugawa Shogunate of Japan, resorted to restrictions to control that trade. The Dutch, the Portuguese, and the Spanish established trading posts along the coasts of south and southeast Asia as part of their coastal trading empires in early modern eastern and southern Asia. Trade Restrictions In the early Ming, after the devastation of the war that expelled the Mongols, the Hongwu Emperor imposed severe restrictions on trade, which came to be known as the “haijin” or “sea ban.” Believing that agriculture was the basis of the economy, Hongwu favored that industry over all else, including the merchant industry. 
Partly imposed to deal with Japanese piracy amid the mopping up of Yuan partisans, the sea ban was completely counterproductive; by the 16th century, piracy and smuggling were endemic and mostly consisted of Chinese who had been dispossessed by the policies. China’s foreign trade was limited to irregular and expensive tribute missions, and resistance to them among the Chinese bureaucracy led to the scrapping of Zheng He’s fleets. Piracy dropped to negligible levels only upon the ending of the policy in 1567. After the Hongwu Emperor’s death, most of his policies were reversed by his successors. By the late Ming, the state was losing power to the very merchants Hongwu had wanted to restrict. Trade Expands After the Chinese banned direct trade with Japan, the Portuguese filled this commercial vacuum as intermediaries between China and Japan. The Portuguese bought Chinese silk and sold it to the Japanese in return for Japanese-mined silver; since silver was more highly valued in China, the Portuguese could then use Japanese silver to buy even larger stocks of Chinese silk. However, by 1573—after the Spanish established a trading base in Manila—the Portuguese intermediary trade was eclipsed by silver from the Spanish Americas, which became the prime source of silver entering China. Although it is unknown just how much silver flowed from the Philippines to China, it is known that the main port for the Mexican silver trade—Acapulco—shipped between 150,000 and 345,000 kg (4 to 9 million taels) of silver annually from 1597 to 1602. Although the bulk of imports to China were silver, the Chinese also purchased New World crops from the Spanish Empire, including sweet potatoes, maize, and peanuts. These were foods that could be cultivated in lands where traditional Chinese staple crops—wheat, millet, and rice—couldn’t grow; hence, they facilitated a rise in the population of China. 
In the Song dynasty (960 – 1279), rice had become the major staple crop of the poor, but after sweet potatoes were introduced to China around 1560, they gradually became the traditional food of the lower classes. The Ming also imported many European firearms in order to keep their weaponry up to date. Relations between the Spanish and the Chinese began much more warmly than the reception first given to the Portuguese in China. In the Philippines, the Spanish defeated the fleet of the infamous Chinese pirate Limahong in 1575, an act greatly appreciated by the Ming admiral who had been sent to capture Limahong. In fact, the Chinese admiral invited the Spanish to board his vessel and travel back to China, beginning a trip that included two Spanish soldiers and two Christian friars eager to spread the faith. However, the friars returned to the Philippines after it became apparent that their preaching was unwelcome; Matteo Ricci would fare better in his trip of 1582. The thriving of trade and commerce was aided by the construction of canals, roads, and bridges by the Ming government. The Ming saw the rise of several merchant clans, such as the Huai and Jin, who commanded large amounts of wealth. The gentry and merchant classes started to fuse, and the merchants gained power at the expense of the state. Some merchants were reputed to have a treasure of 30 million taels. During the last years of the Wanli Emperor’s reign and the reigns of his two successors, an economic crisis developed that was centered around a sudden widespread lack of the empire’s chief medium of exchange: silver. The Protestant powers of the Dutch Republic and the Kingdom of England were staging frequent raids and acts of piracy against the Catholic-based empires of Spain and Portugal in order to weaken their global economic power. Meanwhile, in favor of shipping American-mined silver directly from Spain to Manila, Philip IV of Spain (r. 
1621 – 1665) began cracking down on illegal smuggling of silver from Mexico and Peru across the Pacific towards China. In 1639, the new Tokugawa regime of Japan shut down most of its foreign trade with European powers, halting yet another source of silver coming into China. Collectively, these reductions in the flow of silver into China caused a dramatic spike in the value of silver, which made paying taxes nearly impossible for most provinces in China. People began hoarding precious silver, forcing the ratio of the value of copper to silver into a steep decline. In the 1630s, a string of one thousand copper coins was worth an ounce of silver; by 1640 it was reduced to the value of half an ounce; by 1643 it was worth roughly one-third of an ounce. For peasants this was an economic disaster, since they paid taxes in silver while conducting local trade and selling their crops with copper coins. Isolationism in the Edo Period The isolationist policy of the Tokugawa shogunate, known as Sakoku, tightly controlled Japanese trade and foreign influences for over 200 years, ending with the Perry Expedition that forced Japan to open its market to Western imperial powers. Sakoku Sakoku was the foreign relations policy of Japan under which severe restrictions were placed on the entry of foreigners to Japan and Japanese people were forbidden to leave the country without special permission, on penalty of death if they returned. The policy was enacted, through a number of edicts and policies from 1633 to 1639, by the Tokugawa shogunate under Tokugawa Iemitsu—the third shogun of the Tokugawa dynasty. It largely remained officially in effect until 1866, although the arrival of Commodore Matthew Perry in the 1850s began the opening of Japan to Western trade, eroding its enforcement. 
Historians have argued that the Sakoku policy was established to remove the colonial and religious influence of Spain and Portugal, which was perceived as posing a threat to the stability of the shogunate and to peace in the archipelago. Some scholars, however, have challenged this view as only a partial explanation. Another important factor behind Sakoku was the Tokugawa government’s desire to acquire sufficient control over Japan’s foreign policy, to guarantee peace, and to maintain Tokugawa supremacy over other powerful lords in the country. Japan was not completely isolated under the Sakoku policy, but strict regulations were applied to commerce and foreign relations by the shogunate and certain feudal domains (han). The policy stated that the only European influence permitted was the Dutch factory at Dejima in Nagasaki. Trade with China was also handled at Nagasaki. Trade with Korea was limited to the Tsushima Domain. Trade with the Ainu people was limited to the Matsumae Domain in Hokkaidō, and trade with the Ryūkyū Kingdom took place in Satsuma Domain. Apart from these direct commercial contacts in peripheral provinces, trading countries sent regular missions to the shogun in Edo and Osaka Castle. Due to the necessity for Japanese subjects to travel to and from these trading posts, this trade resembled outgoing trade, with Japanese subjects making regular contact with foreign traders in essentially extraterritorial land. Trade with Chinese and Dutch traders in Nagasaki took place on an island called Dejima, separated from the city by a small strait. Foreigners could not enter Japan from Dejima, nor could Japanese enter Dejima, without special permissions or authority. Jesuits, European Catholic missionaries, came to east Asia to spread Christianity. These missionaries were part of the missionary impulse that the Roman Catholic Church pursued across east Asia from the sixteenth into the twentieth centuries. 
In these efforts Catholic missionaries were competing with Muslims, Hindus, and, later, Protestants. Cultural and technological exchanges also were one-sided, with Europeans bringing new technology to east Asia, and exposing east Asians to various facets of European cultures. East Asians did not send ships to Europe and pursue this same strategy in reverse. East Asian leaders were most interested in European technology and did not wish to embrace other parts of European culture as part of any technological exchanges. For example, during the seventeenth and eighteenth centuries, European visitors found it much easier to interest the Chinese in European clocks than in Christianity. These early modern interactions between Europeans and east Asia, along with European advances in military and transportation technology, would lay the foundation for the relationships between these two sets of peoples. East Asian efforts to catch up technologically with the West would become one of the key themes in east Asian history. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - Kircher, Athanasius, 1602-1680., CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons. Provided by: Wikipedia. Located at: https://commons.wikimedia.org/wiki/File:Ricci_Guangqi_2.jpg. License: CC BY-SA: Attribution-ShareAlike. - Hongwu Emperor. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - History of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. Located at: https://commons.wikimedia.org/wiki/File:Hongwu1.jpg. License: Public Domain: No Known Copyright - Economy of the Ming dynasty. Provided by: Wikipedia. 
License: CC BY-SA: Attribution-ShareAlike - 1280px-Chen_Hongshou,_leaf_album_painting.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Qing conquest of the Ming. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 1024px-Matteo_Ricci_Far_East_1602_Larger.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - History of Japan. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Tokugawa Ieyasu. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Tokugawa_Ieyasu2.JPG. Provided by: Wikimedia Commons. License: Public Domain: No Known Copyright
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87811/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87813/overview
Ming and Qing Dynasties Overview Ming and Qing Dynasties The Ming and Qing Dynasties presided over China during the early modern period in human history. Each witnessed economic development and growth, accompanying population growth, technological, financial, and organizational innovations, as well as the arrival of Europeans. These dynasties also witnessed the eclipse of China as a leading power in the world by a number of European powers. Learning Objective Explain the origins, course, accomplishments, decline, and downfall of the Ming and the Qing Dynasties. Key Terms / Key Concepts calligraphy: a visual art related to writing; the design and execution of lettering with a broad-tip brush, among other writing instruments Forbidden City: the Chinese imperial palace from the Ming dynasty to the end of the Qing dynasty—the years 1420 to 1912—in Beijing. Manchu: a Chinese ethnic minority, formerly the Jurchen people, who founded the Qing dynasty Qing dynasty: the last imperial dynasty of China, ruling from 1644 to 1912 with a brief, abortive restoration in 1917 (It was preceded by the Ming dynasty and succeeded by the Republic of China. Its multi-cultural empire lasted almost three centuries and formed the territorial base for the modern Chinese state.) The Ming Dynasty The Ming dynasty (January 23, 1368 – April 25, 1644), officially the Great Ming, was an imperial dynasty of China founded by the peasant rebel leader Zhu Yuanzhang (known posthumously as Emperor Taizu). It succeeded the Yuan dynasty and preceded the short-lived Shun dynasty, which was in turn succeeded by the Qing dynasty. At its height, the Ming dynasty had a population of at least 160 million people, but some assert that the population could actually have been as large as 200 million. Ming rule saw the construction of a vast navy and a standing army of one million troops. 
Although private maritime trade and official tribute missions from China had taken place in previous dynasties, in the 15th century the size of the tributary fleet under the Muslim eunuch admiral Zheng He surpassed all others in grandeur. There were enormous construction projects, including the restoration of the Grand Canal, the restoration of the Great Wall as it is seen today, and the establishment of the Forbidden City in Beijing during the first quarter of the 15th century. The Ming dynasty is, for many reasons, generally known as a period of stable, effective government. It is seen as the most secure and unchallenged ruling house that China had known up until that time. Its institutions were generally preserved by the following Qing dynasty. Civil service dominated government to an unprecedented degree at this time. During the Ming dynasty, the territory of China expanded (and in some cases also contracted) greatly. For a brief period during the dynasty northern Vietnam was included in Ming territory. Other important developments included the moving of the capital from Nanjing to Beijing. Founding of the Ming Dynasty The Mongol-led Yuan dynasty (1279 – 1368) ruled before the establishment of the Ming dynasty. Alongside institutionalized ethnic discrimination against Han Chinese that stirred resentment and rebellion, other explanations for the Yuan’s demise included overtaxing areas hard-hit by crop failure and inflation, as well as massive flooding of the Yellow River caused by abandonment of irrigation projects. Consequently, agriculture and the economy were in shambles, and rebellion broke out among the hundreds of thousands of peasants called upon to work on repairing the dikes of the Yellow River. A number of Han Chinese groups revolted, including the Red Turbans in 1351. Zhu Yuanzhang was a penniless peasant and Buddhist monk who joined the Red Turbans in 1352, and he soon gained a reputation after marrying the foster daughter of a rebel commander. 
Zhu was born into a desperately poor tenant farmer family in Zhongli Village in the Huai River plain, which is in present-day Fengyang, Anhui Province. When he was sixteen, the Huai River broke its banks and flooded the lands where his family lived. Subsequently, a plague killed his entire family, except one of his brothers. He buried them by wrapping them in white clothes. Destitute, Zhu accepted a suggestion to take up a pledge made by his late father and became a novice monk at the Huangjue Temple, a local Buddhist monastery. He did not remain there for long, as the monastery ran short of funds, forcing him to leave. For the next few years, Zhu led the life of a wandering beggar and personally experienced and saw the hardships of the common people. After about three years, he returned to the monastery and stayed there until he was around twenty-four years old. He learned to read and write during the time he spent with the Buddhist monks. The monastery where Zhu lived was eventually destroyed by an army that was suppressing a local rebellion. In 1352, Zhu joined one of the many insurgent forces that had risen in rebellion against the Mongol-led Yuan dynasty. He rose rapidly through the ranks and became a commander. His rebel force later joined the Red Turbans, a millenarian sect related to the White Lotus Society, and one that followed cultural and religious traditions of Buddhism, Zoroastrianism, and other religions. Widely seen as a defender of Confucianism and neo-Confucianism among the predominantly Han Chinese population in China, Zhu emerged as a leader of the rebels that were struggling to overthrow the Yuan dynasty. In 1356 Zhu’s rebel force captured the city of Nanjing, which he would later establish as the capital of the Ming dynasty. Zhu enlisted the aid of many able advisors, including the artillery specialists Jiao Yu and Liu Bowen. 
In the Battle of Lake Poyang in 1363, Zhu cemented his power in the south by eliminating his archrival, rebel leader Chen Youliang. This battle was—in terms of personnel—one of the largest naval battles in history. After the dynastic head of the Red Turbans suspiciously died in 1367 while a guest of Zhu, Zhu made his imperial ambitions known by sending an army toward the Yuan capital in 1368. The last Yuan emperor fled north into Mongolia and Zhu declared the founding of the Ming dynasty after razing the Yuan palaces in Dadu (present-day Beijing) to the ground. Instead of following the traditional way of naming a dynasty after the first ruler’s home district, Zhu Yuanzhang’s choice of “Ming,” or “Brilliant,” for his dynasty followed a Mongol precedent of choosing an uplifting title. Zhu Yuanzhang also took “Hongwu,” or “Vastly Martial,” as his reign title. Although the White Lotus had instigated his rise to power, the emperor later denied that he had ever been a member of the organization, and he suppressed the religious movement after he became emperor. Zhu Yuanzhang drew on both past institutions and new approaches in order to create jiaohua (civilization) as an organic Chinese governing process. This included building schools at all levels and increasing study of the classics, as well as books on morality. There was also a distribution of Neo-Confucian ritual manuals and a new civil service examination system for recruitment into the bureaucracy. The Ming Dynasty is regarded as one of China’s three golden ages (the other two being the Han and Song periods). The period was marked by the increasing political influence of the merchants, the gradual weakening of imperial rule, and technological advances. The Economy under the Ming Dynasty The economy of the Ming dynasty (1368 – 1644) of China was the largest in the world during that period. It was characterized by extreme inflation, the return to silver bullion, and the rise of large agricultural markets. 
Currency during the Ming Dynasty The early Ming dynasty attempted to use paper currency, with outflows of bullion limited by its ban on private foreign commerce. As under preceding dynasties, the paper currency suffered from massive counterfeiting and hyperinflation. In 1425, Ming notes were trading at about 0.014% of their original value under the Hongwu Emperor. The notes remained in circulation as late as 1573, but their printing ceased in 1450. Minor coins were minted in base metals, but trade mostly occurred using silver ingots. As their purity and exact weight varied, they were treated as bullion and measured in tael. These privately made “sycee” first came into use in Guangdong. Spreading to the lower Yangtze sometime before 1423, sycee became acceptable for payment of tax obligations. In the mid-15th century, interruptions in the circulation of silver resulted in contractions in the money supply that led to an extensive reversion to barter. The silver shortage was solved in part through smuggling, then by way of the legal importation of Japanese silver, mostly through the Portuguese and Dutch, and Spanish silver from Potosí carried on the Manila galleons. In succession, China required silver for the payment of provincial taxes in 1465, the salt tax in 1475, and corvée exemptions in 1485. By the late Ming, the amount of silver being used was extraordinary. At a time when English traders considered tens of thousands of pounds an exceptional fortune, the Zheng clan of merchants regularly engaged in transactions valued at millions of taels. However, a second silver contraction occurred in the mid-17th century when King Philip IV of Spain began enforcing laws limiting direct trade between Spanish South America and China at about the same time the new Tokugawa shogunate in Japan restricted most of its foreign exports, cutting off Dutch and Portuguese access to its silver. The dramatic spike in silver’s value in China made payment of taxes nearly impossible for most provinces. 
The government even resumed use of paper currency amid Li Zicheng’s rebellion. Agriculture during the Ming Dynasty In order to recover from the rule of the Mongols and the wars that followed, the Hongwu Emperor enacted pro-agricultural policies. The state invested extensively in agricultural canals and reduced taxes on agriculture to 3.3% of the output, and later to 1.5%. Ming farmers also introduced many innovations and new methods, such as water-powered plows and crop rotation. This led to a massive agricultural surplus that became the basis of a market economy. The Ming saw the rise of commercial plantations suitable to their regions. Tea, fruit, paint, and other goods were produced on a massive scale by these agricultural plantations. Regional patterns of production established during this period continued into the Qing dynasty. The Columbian exchange brought crops such as corn. Meanwhile, large numbers of peasants abandoned the land to become artisans. The population of the Ming boomed; estimates for the population of the Ming range from 160 to 200 million. Agriculture during the Ming changed significantly. Firstly, gigantic areas devoted to cash crops sprang up, and there was demand for the crops in the new market economy. Secondly, agricultural tools and carts, some water powered, helped to create a large agricultural surplus that formed the basis of the rural economy. Besides rice, other crops were grown on a large scale. Although images of autarkic farmers who had no connection to the rest of China may have some merit for the earlier Han and Tang dynasties, this was certainly not the case for the Ming dynasty. During the Ming dynasty, the increase in population and the decrease in quality land made it necessary for farmers to make a living off cash crops. Markets for these crops appeared in the rural countryside, where goods were exchanged and bartered. 
A second type of market that developed in China was the urban-rural type, in which rural goods were sold to urban dwellers. This was common when landlords decided to reside in the cities and use income from rural land holdings to facilitate exchange in those urban areas. Professional merchants used this type of market to buy rural goods in large quantities. The third type of market was the “national market,” which was developed during the Song dynasty but particularly enhanced during the Ming. This market involved not only the exchanges described above but also products produced directly for the market. Unlike earlier dynasties, many Ming peasants were no longer generating only products they needed; many of them produced goods for the market, which they then sold at a profit. Land Reform Because the Hongwu Emperor came from a peasant family, he was aware of how peasants used to suffer under the oppression of the scholar-bureaucrats and the wealthy. Many of the latter, relying on their connections with government officials, encroached unscrupulously on peasants’ lands and bribed the officials to transfer the burden of taxation to the poor. To prevent such abuse, the Hongwu Emperor instituted two systems: Yellow Records and Fish Scale Records. These systems served both to secure the government’s income from land taxes and to affirm that peasants would not lose their lands. However, the reforms did not eliminate the threat the bureaucrats posed to the peasants. Instead, the expansion of the bureaucrats and their growing prestige translated into more wealth and tax exemption for those in government service. The bureaucrats gained new privileges, and some became illegal moneylenders and managers of gambling rings. Using their power, the bureaucrats expanded their estates at the expense of peasants’ land through outright purchase of those lands and foreclosure on their mortgages whenever they wanted the lands. 
The peasants often became either tenants or workers, and some sought employment elsewhere. From the beginning of the Ming dynasty, great care was taken by the Hongwu Emperor to distribute land to peasants. One way was through forced migration to less densely populated areas; according to tradition, migrants were assembled at a great pagoda tree in Hongdong before being moved. Public works projects, such as the construction of irrigation systems and dikes, were undertaken in an attempt to help farmers. In addition, the Hongwu Emperor also reduced the demands on the peasantry for forced labour. In 1370, the Hongwu Emperor ordered that some lands in Hunan and Anhui should be given to young farmers who had reached adulthood. The order was intended to prevent landlords from seizing the land, as it also decreed that the titles to the lands were not transferable. During the middle part of his reign, the Hongwu Emperor passed an edict stating that those who brought fallow land under cultivation could keep it as their property without being taxed. Art under the Ming Dynasty Literature, poetry, and painting flourished during the Ming dynasty, especially in the economically prosperous lower Yangtze valley. Literature and Poetry Short fiction had been popular in China as far back as the Tang dynasty (618 – 907), and the works of contemporaneous Ming authors such as Xu Guangqi, Xu Xiake, and Song Yingxing were often technical and encyclopedic. But the most striking literary development during the Ming period was the vernacular novel. While the gentry elite were educated enough to fully comprehend the language of classical Chinese, those with rudimentary educations—such as women in educated families, merchants, and shop clerks—became a large potential audience for literature and performing arts that employed vernacular Chinese. Literati scholars edited or developed major Chinese novels into mature form in this period, such as Water Margin and Journey to the West. 
Jin Ping Mei, published in 1610, though it incorporated earlier material, exemplifies the trend toward independent composition and concern with psychology. In the later years of the dynasty, Feng Menglong and Ling Mengchu innovated with vernacular short fiction. Theater scripts were equally imaginative. The most famous script, The Peony Pavilion, was written by Tang Xianzu (1550 – 1616) and had its first performance at the Pavilion of Prince Teng in 1598. Informal essay and travel writing was another highlight of Ming literature. Xu Xiake (1587–1641), a travel literature author, published his Travel Diaries in 404,000 written characters, with information on everything from local geography to mineralogy. In contrast to Xu Xiake, who focused on technical aspects in his travel literature, the Chinese poet and official Yuan Hongdao (1568 – 1610) used travel literature to express his desires for individualism, as well as autonomy from and frustration with Confucian court politics. Yuan desired to free himself from the ethical compromises that were inseparable from the career of a scholar-official. This anti-official sentiment in Yuan’s travel literature and poetry was actually following in the tradition of the Song dynasty poet and official Su Shi (1037 – 1101). Yuan Hongdao and his two brothers, Yuan Zongdao (1560 – 1600) and Yuan Zhongdao (1570 – 1623), were the founders of the Gong’an School of letters. This highly individualistic school of poetry and prose was criticized by the Confucian establishment for its association with intense sensual lyricism, which was also apparent in Ming vernacular novels like the Jin Ping Mei. Yet even the gentry and scholar-officials were affected by the new popular romantic literature, seeking courtesans as soulmates to reenact the heroic love stories that arranged marriages often could not provide or accommodate. 
The first reference to the publishing of private newspapers in Beijing was in 1582; by 1638 the Beijing Gazette switched from using woodblock print to movable type printing. The new literary field of the moral guide to business ethics was developed during the late Ming period for the readership of the merchant class. Painting Famous painters included Ni Zan and Dong Qichang, as well as the Four Masters of the Ming dynasty: Shen Zhou, Tang Yin, Wen Zhengming, and Qiu Ying. They drew upon the techniques, styles, and complexity in painting achieved by their Song and Yuan predecessors, but they added new techniques and styles. Well-known Ming artists could make a living simply by painting due to the high prices they charged for their artworks and the great demand by the highly cultured community, who could afford to collect precious works of art. The artist Qiu Ying was once paid 100 oz of silver to paint a long hand-scroll for the eightieth birthday celebration of a wealthy patron’s mother. Renowned artists often gathered an entourage of followers, some of whom were amateurs who painted while pursuing an official career, and others who were full-time painters. The painting techniques that were invented and developed before the Ming period became classical during it. More colors were used in painting during the Ming dynasty; seal brown became much more widely used, and even over-used. Many new painting skills and techniques were innovated and developed; calligraphy became much more closely integrated with the art of painting. Chinese painting reached another climax in the mid- and late Ming, when painting developed on a broad scale, many new schools were born, and many outstanding masters emerged. Pottery The Ming period was also renowned for ceramics and porcelains. The major production centers for porcelain were the imperial kilns at Jingdezhen in Jiangxi province and Dehua in Fujian province. 
By the 16th century, the Dehua porcelain factories catered to European tastes by creating Chinese export porcelain. Individual potters also became known, such as He Chaozong, who became famous in the early 17th century for his style of white porcelain sculpture. The ceramic trade thrived in Asia; Chuimei Ho estimates that about 16% of late Ming era Chinese ceramic exports were sent to Europe, while the rest were destined for Japan and South East Asia. Carved designs in lacquerware and designs glazed onto porcelain wares displayed intricate scenes similar in complexity to those in painting. These items could be found in the homes of the wealthy, alongside embroidered silks and wares in jade, ivory, and cloisonné. The houses of the rich were also furnished with rosewood furniture and feathery latticework. The writing materials in a scholar’s private study included elaborately carved brush holders made of stone or wood, designed and arranged ritually to give an aesthetic appeal. Connoisseurship in the late Ming period centered on these items of refined artistic taste, which provided work for art dealers and even underground scammers who themselves made imitations and false attributions. The Jesuit Matteo Ricci, while staying in Nanjing, wrote that Chinese scam artists were ingenious at making forgeries and thus huge profits. However, there were guides to help the wary new connoisseurs; Liu Tong (d. 1637) wrote a book printed in 1635 that told his readers how to spot fake and authentic pieces of art. He revealed that a Xuande-era (1426 – 1435) bronze work could be authenticated by judging its sheen; porcelain wares from the Yongle era (1402 – 1424) could be judged authentic by their thickness. Fall of the Ming Dynasty The fall of the Ming dynasty was caused by a combination of factors, including an economic disaster due to lack of silver, a series of natural disasters, peasant uprisings, and finally attacks by the Manchu people. 
Economic Breakdown During the last years of the Wanli Emperor's reign and the reigns of his two successors, an economic crisis developed that centered on a sudden widespread shortage of the empire's chief medium of exchange: silver. The Protestant powers of the Dutch Republic and the Kingdom of England were staging frequent raids and acts of piracy against the Catholic empires of Spain and Portugal in order to weaken their global economic power. Meanwhile, Philip IV of Spain (r. 1621 – 1665) began cracking down on the illegal smuggling of silver from Mexico and Peru across the Pacific towards China, in favor of shipping American-mined silver directly from Spain to Manila. In 1639, the new Tokugawa regime of Japan shut down most of its foreign trade with European powers, halting yet another source of silver flowing into China. However, while Japanese silver still came into China in limited amounts, the greatest blow to the flow of silver came from the Americas. These events, occurring at roughly the same time, caused a dramatic spike in the value of silver and made paying taxes nearly impossible for most provinces. People began hoarding precious silver, forcing the ratio of the value of copper to silver into a steep decline. In the 1630s, a string of one thousand copper coins was worth an ounce of silver; by 1640 it was reduced to the value of half an ounce; by 1643 it was worth roughly one-third of an ounce. For peasants this was an economic disaster, since they paid taxes in silver while conducting local trade and selling their crops with copper coins. Natural Disasters In the early half of the 17th century, famines became common in northern China because of unusually dry and cold weather that shortened the growing season; these were effects of a larger ecological event now known as the Little Ice Age.
Widespread famine, tax increases, massive military desertions, a declining relief system, natural disasters such as flooding, and the government's inability to properly manage irrigation and flood-control projects all caused great loss of life and a breakdown of normal civility. The central government was starved of resources and could do very little to mitigate the effects of these calamities. Making matters worse, a widespread epidemic spread across China from Zhejiang to Henan, killing a large but unknown number of people. The famine and drought of the late 1620s and the 1630s contributed to the rebellions that broke out in Shaanxi, led by rebel leaders such as Li Zicheng and Zhang Xianzhong. The Qing Conquest of Ming: Rebellion, Invasion, Collapse The Qing conquest of the Ming was a period of conflict between the Qing dynasty, established by the Manchu clan Aisin Gioro in Manchuria (contemporary Northeastern China), and the ruling Ming dynasty of China. The Manchu, formerly called the Jurchen people, had risen to power under the leadership of a tribal leader named Nurhaci. In 1618, leading up to the Qing conquest, Nurhaci commissioned a document titled the Seven Grievances, which enumerated resentments against the Ming and encouraged rebellion against their domination. Many of the grievances dealt with conflicts against Yehe, a major Manchu clan, and with Ming favoritism of Yehe. Nurhaci's demand that the Ming pay tribute to him to redress the Seven Grievances was effectively a declaration of war, as the Ming were not willing to pay money to a former tributary. Shortly afterwards, Nurhaci began to force the Ming out of Liaoning in southern Manchuria. In 1640, masses of Chinese peasants—who were starving, unable to pay their taxes, and no longer in fear of the frequently defeated Chinese army—began to form huge bands of rebels.
The Chinese military, caught between fruitless efforts to defeat the Manchu raiders from the north and huge peasant revolts in the provinces, essentially fell apart. On April 24, 1644, Beijing fell to a rebel army led by Li Zicheng, a former minor Ming official who became the leader of the peasant revolt and then proclaimed the Shun dynasty. The last Ming emperor, the Chongzhen Emperor, hanged himself on a tree in the imperial garden outside the Forbidden City. When Li Zicheng moved against him, the Ming general Wu Sangui shifted his alliance to the Manchus. Li Zicheng was defeated at the Battle of Shanhai Pass by the joint forces of Wu Sangui and the Manchu Prince Dorgon. On June 6, the Manchus and Wu entered the capital and proclaimed the young Shunzhi Emperor as Emperor of China. The Kangxi Emperor ascended the throne in 1661, and in 1662 his regents launched the Great Clearance to defeat the resistance of Ming loyalists in South China. In 1662, Zheng Chenggong founded the Kingdom of Tungning in Taiwan, a pro-Ming state with the goal of reconquering China. However, the Kingdom of Tungning was defeated in the Battle of Penghu by the Han Chinese admiral Shi Lang, who had also served under the Ming. The Kangxi Emperor also fought off several rebellions, such as the Revolt of the Three Feudatories in southern China, led by Wu Sangui starting in 1673, and then launched a series of campaigns that expanded his empire. The fall of the Ming dynasty was caused by a combination of factors. Kenneth Swope argues that one key factor was deteriorating relations between Ming royalty and the Ming empire's military leadership. Other factors include repeated military expeditions to the North, inflationary pressures caused by spending too much from the imperial treasury, natural disasters, and epidemics of disease. Contributing further to the chaos were the peasant rebellion in Beijing in 1644 and a series of weak emperors.
Ming power would hold out in what is now southern China for years, but would eventually be overtaken by the Manchus. The Qing Dynasty At the peak of the Qing dynasty (1644 – 1912), China ruled more than one-third of the world's population, had the largest economy in the world, and by area was one of the largest empires ever. Rise to Power The Qing dynasty (1644 – 1912) was the last imperial dynasty in China. It was founded not by Han Chinese, who constitute the majority of the Chinese population, but by a sedentary farming people known as the Jurchen. What would become the Manchu state was founded in the early 17th century in Jianzhou (Manchuria) by Nurhaci—the chieftain of a minor Jurchen tribe known as Aisin Gioro. Originally a vassal of the Ming emperors, Nurhaci embarked on an intertribal feud in 1582 that escalated into a campaign to unify the nearby tribes. By 1616, he had consolidated Jianzhou sufficiently to proclaim himself Khan of the Great Jin, in reference to the previous Jurchen dynasty. In 1618, Nurhaci announced the Seven Grievances, a document that enumerated grievances against the Ming, and began to rebel against Ming domination. Nurhaci's demand that the Ming pay tribute to him to redress the grievances was effectively a declaration of war, as the Ming were not willing to pay a former tributary. Shortly after, Nurhaci began to invade the Ming in Liaoning in southern Manchuria. After a series of successful battles, he relocated his capital from Hetu Ala to successively bigger captured Ming cities on the Liaodong Peninsula: first Liaoyang in 1621, then Shenyang (Mukden) in 1625. Relocating his court to Liaodong brought Nurhaci into close contact with the Khorchin Mongol domains on the plains of Mongolia. Nurhaci's policy towards the Khorchins was to seek their friendship and cooperation against the Ming, securing his western border from a powerful potential enemy.
Further, the Khorchin proved a useful ally in the war, lending the Jurchens their expertise as cavalry archers. To guarantee this new alliance, Nurhaci initiated a policy of inter-marriages between the Jurchen and Khorchin nobilities. This is a typical example of Nurhaci's initiatives that eventually became official Qing government policy. During most of the Qing period, the Mongols gave military assistance to the Manchus. Two of Nurhaci's critical contributions were ordering the creation of a written Manchu script based on the Mongolian script, after the earlier Jurchen script had been forgotten, and the creation of the civil and military administrative system that eventually evolved into the Eight Banners—the defining element of Manchu identity. The Eight Banners were administrative/military divisions under the Qing dynasty into which all Manchu households were placed. In war, the Eight Banners functioned as armies, but the banner system was also the basic organizational framework of Manchu society. The banner armies played an instrumental role in Nurhaci's unification of the fragmented Jurchen people and in the Qing dynasty's conquest of the Ming. In 1635, Nurhaci's son and successor Hong Taiji changed the name of the Jurchen ethnic group to the Manchu. At the same time, the Ming dynasty was fighting for its survival. Ming government officials fought against each other, against fiscal collapse, and against a series of peasant rebellions. In 1640, masses of Chinese peasants who were starving, unable to pay their taxes, and no longer in fear of the frequently defeated Chinese army began to form huge bands of rebels. The Chinese military, caught between fruitless efforts to defeat the Manchu raiders from the north and huge peasant revolts in the provinces, essentially fell apart. Unpaid and unfed, the army was defeated by Li Zicheng—who styled himself the Prince of Shun.
In 1644, Beijing fell to a rebel army led by Li Zicheng when the city gates were opened from within. During the turmoil, the last Ming emperor hanged himself on a tree in the imperial garden outside the Forbidden City. Li Zicheng, a former minor Ming official, established a short-lived Shun dynasty. Qing Empire Qing rule then passed to the regent Dorgon, whom historians have called "the mastermind of the Qing conquest" and "the principal architect of the great Manchu enterprise." Under Dorgon, the Qing eventually subdued the capital area, received the capitulation of Shandong local elites and officials, and conquered Shanxi and Shaanxi. Then, they turned their eyes to the rich commercial and agricultural region of Jiangnan, south of the lower Yangtze River. They also wiped out the last remnants of rival regimes (Li Zicheng was killed in 1645). Finally, they managed to kill claimants to the throne of the Southern Ming in Nanjing (1645) and Fuzhou (1646), and they chased Zhu Youlang, the last Southern Ming emperor, out of Guangzhou (1647) and into the far southwestern reaches of China. Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Xinjiang, Tibet, and Mongolia were also formally incorporated into Chinese territory. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories: an uprising of three generals in Southern China who were denied hereditary rule of large fiefdoms granted by the previous emperor. In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which had been founded by the Ming loyalist Koxinga in 1662 after the fall of the Southern Ming and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk.
The Russians gave up the area north of the Amur River, as far as the Stanovoy Mountains, and kept the area between the Argun River and Lake Baikal. This border along the Argun River and Stanovoy Mountains lasted until 1860. The decades of Manchu conquest caused enormous loss of life, and the economy of China shrank drastically. In total, the Qing conquest of the Ming (1618 – 1683) cost as many as 25 million lives. The Ten Great Campaigns of the Qianlong Emperor from the 1750s to the 1790s extended Qing control into Central Asia. The early rulers maintained their Manchu ways; while their title was Emperor, they used the title of khan when dealing with the Mongols, and they were patrons of Tibetan Buddhism. They governed using Confucian styles and institutions of bureaucratic government and retained the imperial examinations to recruit Han Chinese to work under or in parallel with Manchus. They also adapted the ideals of the tributary system in dealing with neighboring territories. The Qianlong reign (1735 – 1796) saw the dynasty's apogee and the beginning of its decline in prosperity and imperial control. The population rose to some 400 million, but taxes and government revenues were fixed at a low rate, virtually guaranteeing eventual fiscal crisis. Corruption set in, rebels tested government legitimacy, and ruling elites did not change their mindsets in the face of changes in the world system. Still, by the end of the Qianlong Emperor's long reign, the Qing Empire was at its zenith. China ruled more than one-third of the world's population and had the largest economy in the world. By area alone, it was one of the largest empires ever. Government The early Qing emperors adopted the bureaucratic structures and institutions of the preceding Ming dynasty but split rule between Han Chinese and Manchus, with some positions also given to Mongols. Like previous dynasties, the Qing recruited officials via the imperial examination system until the system was abolished in 1905.
The Qing divided official positions into civil and military posts. Civil appointments ranged from attendant to the emperor or Grand Secretary in the Forbidden City (highest) down to prefectural tax collector, deputy jail warden, deputy police commissioner, or tax examiner. Military appointments ranged from field marshal or chamberlain of the imperial bodyguard down to third class sergeant, corporal, or first or second class private. The formal structure of the Qing government centered on the Emperor as the absolute ruler, who presided over six boards (Ministries), each headed by two presidents and assisted by four vice presidents. In contrast to the Ming system, however, Qing ethnic policy dictated that appointments be split between Manchu noblemen and Han officials who had passed the highest levels of the state examinations. The Grand Secretariat, a key policy-making body under the Ming, lost its importance during the Qing and evolved into an imperial chancery. The institutions inherited from the Ming formed the core of the Qing Outer Court, which handled routine matters and was located in the southern part of the Forbidden City. In order to keep routine administration from taking over the empire, the Qing emperors made sure that all important matters were decided in the Inner Court, which was dominated by the imperial family and Manchu nobility and located in the northern part of the Forbidden City. The core institution of the Inner Court was the Grand Council. It emerged in the 1720s under the reign of the Yongzheng Emperor as a body charged with handling Qing military campaigns against the Mongols, but it soon took over other military and administrative duties and centralized authority under the crown. The Grand Councilors served as a sort of privy council to the emperor.
Society Under the Qing Under Qing rule, the empire's population expanded rapidly and migrated extensively, the economy grew, and arts and culture flourished, but the development of the military gradually weakened the central government's grip on the country. During the early and mid-Qing period, the population grew rapidly and was remarkably mobile. Evidence suggests that the empire's expanding population moved in a manner unprecedented in Chinese history. Migrants relocated hoping for either permanent resettlement or, at least in theory, a temporary stay. The latter included the empire's increasingly large and mobile manual workforce, its densely overlapping internal diaspora of merchant groups, and the movement of Qing subjects overseas, largely to Southeast Asia, in search of trade and other economic opportunities. Qing society was divided into five relatively closed estates. The elites consisted of the estates of the officials, the comparatively minuscule aristocracy, and the intelligentsia. There were also two major categories of ordinary citizens: the "good" and the "mean." The majority of the population belonged to the first category and were described as liangmin, a legal term meaning good people, as opposed to jianmin, meaning the mean (or ignoble) people. Qing law explicitly stated that the traditional four occupational groups of scholars, farmers, artisans, and merchants were "good," and they could have the status of commoners. On the other hand, slaves or bonded servants, entertainers (including prostitutes and actors), and low-level employees of government officials were the "mean" people, and they were considered legally inferior to commoners. Economy By the end of the 17th century, the Chinese economy had recovered from the devastation caused by the wars in which the Ming dynasty was overthrown.
In the 18th century, markets continued to expand, with more trade between regions, a greater dependence on overseas markets, and a greatly increased population. After the re-opening of the southeast coast, which had been closed in the late 17th century, foreign trade was quickly re-established and expanded at 4% per annum throughout the latter part of the 18th century. China continued to export tea, silk, and manufactures, which resulted in a large, favorable trade balance with the West. The resulting inflow of silver expanded the money supply, facilitating the growth of competitive and stable markets. The government broadened land ownership by returning to families the land they had sold to large landowners in the late Ming period when unable to pay the land tax. To give people more incentives to participate in the market, the tax burden was reduced in comparison with the late Ming, and the corvée system was replaced with a head tax used to hire laborers. A system of monitoring grain prices eliminated severe shortages and enabled the price of rice to rise slowly and smoothly through the 18th century. Wary of the power of wealthy merchants, Qing rulers limited their trading licenses and usually banned new mines, except in poor areas. Some scholars see these restrictions on the exploitation of domestic resources and limits imposed on foreign trade as a cause of the Great Divergence, by which the Western world overtook China economically. By the end of the 18th century the population had risen to 300 million from approximately 150 million during the late Ming dynasty. This rise is attributed to the long period of peace and stability in the 18th century and the import of new crops China received from the Americas, including peanuts, sweet potatoes, and maize. New species of rice from Southeast Asia led to a huge increase in production. Merchant guilds proliferated in all of the growing Chinese cities and often acquired great social and even political influence.
Rich merchants with official connections built up huge fortunes and patronized literature, theater, and the arts. Textile and handicraft production boomed. Military The early Qing military was rooted in the Eight Banners, first developed by Nurhaci to organize Jurchen society beyond petty clan affiliations. The banners were differentiated by color. The yellow, bordered yellow, and white banners were known as the Upper Three Banners and remained under the direct command of the emperor. The remaining banners were known as the Lower Five Banners. They were commanded by hereditary Manchu princes descended from Nurhaci's immediate family. Together, these princes formed the ruling council of the Manchu nation, as well as the high command of the army. Nurhaci's son Hong Taiji expanded the system to include mirrored Mongol and Han Banners. After the capture of Beijing in 1644, the relatively small Banner armies were further augmented by the Green Standard Army, made up of Ming troops who had surrendered to the Qing. These eventually outnumbered Banner troops three to one. They maintained their Ming-era organization and were led by a mix of Banner and Green Standard officers. Banner armies were organized along ethnic lines, namely Manchu and Mongol, but they included non-Manchu bonded servants registered under the households of their Manchu masters. During his reign, the Qianlong Emperor emphasized Manchu ethnicity, ancestry, language, and culture in the Eight Banners, and in 1754 started a mass discharge of Han bannermen. This led to a change from a Han majority to a Manchu majority within the Eight Banner system. The eventual decision to turn the banner troops into a professional force led to their decline as a fighting force. After a series of military defeats in the mid-19th century, the Qing court ordered a Chinese official, Zeng Guofan, to organize regional and village militias into an emergency army.
He relied on the local gentry to raise a new type of military organization that became known as the Xiang Army, named after the Hunan region where it was raised. The Xiang Army was a hybrid of local militia and standing army. It was given professional training, but it was paid for out of regional coffers and whatever funds its commanders could muster. Commanders of the local militia were mostly members of the Chinese gentry. The Xiang Army and its successor, the Huai Army, created by Zeng Guofan's colleague and mentee Li Hongzhang, were collectively called the Yong Ying (Brave Camp). The Yong Ying system signaled the end of Manchu dominance in the Qing military establishment. The fact that the corps were financed through provincial coffers and led by regional commanders weakened the central government's grip on the whole country. This structure fostered nepotism and cronyism among its commanders, sowing the seeds of the regional warlordism of the first half of the 20th century. Arts and Culture Under the Qing, traditional forms of art flourished and innovations developed rapidly. High levels of literacy, a successful publishing industry, prosperous cities, and the Confucian emphasis on cultivation all fed a lively and creative set of cultural fields. The Qing emperors were generally adept at poetry, often skilled in painting, and offered their patronage to Confucian culture. The Kangxi and Qianlong emperors, for instance, embraced Chinese traditions both to control the people and to proclaim their own legitimacy. Imperial patronage encouraged literary and fine arts, as well as the industrial production of ceramics and Chinese export porcelain. However, the most impressive aesthetic works were by the scholars and urban elite. Calligraphy and painting remained a central interest to both court painters and the scholar-gentry, who considered the arts part of their cultural identity and social standing. Literature grew to new heights in the Qing period.
Poetry continued as a mark of the cultivated gentleman, but women wrote in larger numbers and poets came from all walks of life. The poetry of the Qing dynasty is a field studied (along with the poetry of the Ming dynasty) for its association with Chinese opera, developmental trends of classical Chinese poetry, the transition to a greater role for vernacular language, and poetry by women in Chinese culture. In drama, the most prestigious form became the so-called Peking opera, although local and folk opera were also widely popular. Even cuisine became a form of artistic expression. Works that detailed the culinary aesthetics and theory, along with a wide range of recipes, were published. The Qing emperors generously supported the arts and sciences. For example, the Kangxi Emperor sponsored the Peiwen Yunfu, a rhyme dictionary published in 1711, and the Kangxi Dictionary published in 1716, which remains to this day an authoritative reference. The Qianlong Emperor sponsored the largest collection of writings in Chinese history, the Siku Quanshu, completed in 1782. Court painters made new versions of the Song masterpiece, Zhang Zeduan’s Along the River During the Qingming Festival, whose depiction of a prosperous and happy realm demonstrated the beneficence of the emperor. By the end of the 19th century, all elements of national artistic and cultural life recognized and began to come to terms with world culture as found in the West and Japan. Whether to stay within old forms or welcome Western models was now a conscious choice rather than an unchallenged acceptance of tradition. Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image: 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Hongwu Emperor. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Hongwu_Emperor. 
License: CC BY-SA: Attribution-ShareAlike - History of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Economy of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu Emperor. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - History of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Haijin. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Economy of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 1024px-Matteo_Ricci_Far_East_1602_Larger.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Wokou.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Ming dynasty painting. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 1024px-Matteo_Ricci_Far_East_1602_Larger.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Wokou.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 800px-thumbnail.jpg. Provided by: Wikipedia. 
License: Public Domain: No Known Copyright - 1280px-Chen_Hongshou,_leaf_album_painting.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Qing conquest of the Ming. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - History of the Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ming dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Hongwu1.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 440px-Ch'iu_Ying_001.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 1024px-Matteo_Ricci_Far_East_1602_Larger.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - Wokou.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 800px-thumbnail.jpg. Provided by: Wikipedia. License: Public Domain: No Known Copyright - 1280px-Chen_Hongshou,_leaf_album_painting.jpg. Provided by: Wikimedia. License: Public Domain: No Known Copyright - 440px-u6e05_u4f5au540d_u300au6e05u592au7956u5929u547du7687u5e1du671du670du50cfu300b.jpg. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Shanhaiguan.gif. Provided by: Wikimedia. Located at: https://commons.wikimedia.org/wiki/File:Shanhaiguan.gif. License: Public Domain: No Known Copyright - Seven Grievances. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Manchu people. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Forbidden City. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Battle of Shanhai Pass. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - History of China. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ten Great Campaigns. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Qing dynasty. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Dorgon. Provided by: Wikipedia. 
https://oercommons.org/courseware/lesson/87814/overview
Aboriginal Australians

Overview

Earth's Most Ancient and Isolated Civilization: Aboriginal Australians

Far south of the equator rests Australia, the only country in the world that is also a continent and an island. Bordered to the west by the Indian Ocean and to the east by the Pacific, Australia is a distant continent rife with unique plant and animal life. The country's distance from other continents and nations has led scientists and historians alike to speculate about how and when the country's inhabitants, the Aborigines, first arrived on the hottest, flattest continent in the world.

Learning Objectives

- Examine early Aboriginal life in Australia and understand why Aboriginal cultures are considered some of the oldest in the world.

Key Terms / Key Concepts

Aborigine: broad term used to describe Australia's many different linguistic and cultural groups who have inhabited the continent since the Pleistocene era

Australia: smallest of the seven continents, located in the South Pacific

Micronesia: subregion of Oceania east of the Philippines that includes hundreds of islands, including the Marshall Islands

Polynesia: subregion of Oceania comprising thousands of islands, including Hawaii and New Zealand

Arrival of the Aborigines

Aborigine is a broad term that describes more than four hundred linguistic and cultural groups whose ancestors predated white European settlement of Australia by nearly fifty thousand years. These groups were often as diverse as they were numerous, and traditions varied significantly. One question that has arisen about these ethnically and linguistically diverse peoples is, "How did they get here, and who were they before?" Scholars and research teams have produced several theories based on historical, archeological, climatological, and geographic studies. The most recent and widely accepted theory is that the Aborigines are peoples who originated in present-day Indonesia, Micronesia, and Southeast Asia.
During a period in which sea levels fell, these groups crossed land bridges and shallow seas to Australia. Oral histories passed down by the Yolngu people of present-day Arnhem Land in the Northern Territory of Australia recount the migration of Aborigine clans who crossed the Sunda continental shelf from parts of Indonesia and Micronesia and then passed into the Australian continent. In the twentieth century, archeological evidence confirmed the theory that humans migrated from Southeast Asia to Australia during the Pleistocene era. Later, some of these peoples migrated into parts of Polynesia.

Aborigines Spread through Australia

Aborigines were communal peoples who practiced polygamy. Instead of a strict nuclear family, the family structure consisted of an extended tribal group that could include 100 – 2,000 people. Tribes were built from many of these groups coming together. Before European arrival in Australia, Aborigine tribes could reach populations of 300 – 500,000. Most tribes shared some common features, but there were also significant differences. Aborigines were hunter-gatherers whose diets consisted of fish, shellfish, turtles, lizards, and a variety of plants. The centrality of hunting meant that territorial claims were fiercely defended, and intertribal warfare was not uncommon.

The importance of spirituality to the ancient Aborigine peoples is reflected in the extensive symbols used in their cave art. Within their religious beliefs and practices is an ancient tradition called The Dreaming. For the Aborigines, The Dreaming encompassed a range of views about the connection of life, including human life, to nature and the world: life emerged from earth and water, and an ancient kinship exists between people and ancestral creators.

Hundreds of languages evolved among the Aborigine tribes as they fanned out across Australia's coastal regions, rivers, bush, and outback. The most widely spoken languages belonged to the Pama-Nyungan language family, which is thought to have originated in northern Australia.
Like many early cultures, Aborigines valued oral traditions, stories, and customs over written words. Because of these practices, little written work about Aboriginal culture survives from before European settlement. To understand their story, we instead turn to the wealth of surviving archeological evidence, such as dwellings and cave paintings. The lack of written sources has produced challenges for scholars, and competing theories about Aborigine origins and histories have emerged. In many ways, the Aborigines are a broad group of peoples whose story, while ancient, is still being uncovered. What remains certain, though, is that they have occupied Australia for over fifty thousand years. They remain one of the oldest, most isolated, and most unchanged cultures in the world.

Attributions

Images from Wikimedia Commons.

Matsuda, Matt K. Pacific Worlds. Cambridge University Press, 2012. 162-163.

Welsh, Frank. Australia: A New History of the Great Southern Land. Overlook Press, 2006. 20-23.

Lilley, Ian. Archaeology of Oceania: Australia and the Pacific Islands. Blackwell, 2006. 60.
oercommons
2025-03-18T00:35:07.500598
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87814/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87816/overview
Islam and Eurasia: An Introduction

Overview

Islam and Eurasia: An Introduction

The relationship between Islam and Eurasia is defined by a number of factors. Among the most significant is the missionary impulse of Islam, which drove Islamic expansion across Eurasia. Conversely, limits to that expansion included the missionary impulses of other religions and belief systems, including Buddhism, Christianity, and Hinduism. Geography, in part, influenced where Islam spread, either facilitating or hindering that process. Tribalism strengthened group identification with and loyalty to Islam, among other ideologies and belief systems; such tribalism among the pastoral peoples who first embraced Islam, for example, was part of Islam's early resilience. Other factors included economics, which in a number of cases acted as an imperative in Islam's spread.

Learning Objective

- Assess the contributions of the Ottoman, the Safavid, and the Mughal empires to the early-modern world.

During the seventh century C.E. Islam spread across Arabia. In the process of expanding across Eurasia, Islam—and other expansionistic belief systems—spread by means of military force, economic incentive, and ideological and/or spiritual appeal. For the next several centuries Islam spread westward across north Africa and into southern Europe, northward into west Asia, and eastward into south and central Asia. With the diversification of the expanding Muslim world, that world became multipolar, with numerous centers of political and military power, as well as cultural creativity. These centers punctuated the corridors of Islam's spread; they also competed with each other. Since the seventh century C.E. Islam has been one of a number of forces that have influenced Eurasian history. Another way to look at Islam's role is as part of a dynamic process in which each of these forces is continually evolving.
Part of what historians do is try to paint a picture of a period, along with all the individuals, groups, and forces that are part of that period. That, of course, is one of the greatest challenges historians face—describing in essentially static terms a past that is the sum of all the dynamic developments and events of which it is composed. This is the case with the effort to understand the development and expansion of Islam. Islam, among other expansionistic belief systems, ultimately was and is a complex and fluid force developing in the complex and fluid landscape that is Eurasia.

Attributions

Title Image - Taj Mahal photo by Vyacheslav Argenberg, via Wikimedia Commons. Located at: https://commons.wikimedia.org/wiki/File:Taj_Mahal_2,_Agra,_India.jpg. License: Creative Commons Attribution 4.0 International.
https://oercommons.org/courseware/lesson/87817/overview
Origins of the Ottoman, the Safavid, and the Mughal Empires

Overview

Origins of the Ottoman, the Safavid, and the Mughal Empires

The founding of the Ottoman, the Safavid, and the Mughal empires occurred within a number of historic contexts, including the expansion and intersection of Islam and Turco-Mongol power across Asia. The founder of each empire followed a tradition and imperative of conquest most famously practiced and immortalized by Genghis Khan during the late twelfth and early thirteenth centuries. Khan's would-be successors continued this tradition, including the fourteenth-century west Asian conqueror Tamerlane. The goal of these conquerors was the conquest of everything in sight, with the world as the ultimate goal. While that was not a practical goal for Mughal, Ottoman, or Safavid rulers, global conquest was at least worthy of lip service. The founder of each of these three empires embraced conquest without question, practicing it as part of a cultural norm or ritual, like members of a fraternity mindlessly getting drunk; none of them thought much beyond conquest. Accordingly, each founder of these three empires also came from a lineage of conquerors.

Learning Objectives

- Describe the origins of the Ottoman, Safavid, and Mughal empires as outgrowths of Turco-Mongol power in late medieval Eurasia.

Key Terms and Concepts

Osman: founding ruler of the Ottoman empire and the source of its name; he led a small kingdom in northwest Anatolia, the core or homeland of the Ottoman empire

Ismail: founding ruler of the Safavid empire

Babur: founding ruler of the Mughal empire

Along with being products of a culture of conquest, the Mughal, Ottoman, and Safavid empires all had viable territorial cores: Osman made Anatolia the Ottoman base, Ismail established the Safavid dynasty by taking over Persia, and Babur forged the Mughal empire by taking over northern and central India. Osman was the first of these three empire builders.
In the early fourteenth century, Osman founded what would become the Ottoman empire in northwest Anatolia. His people were one of a number of Turkish peoples who lived across Asia, and his state rose in power with the decline of the Seljuk Turks during the fourteenth century. From that territorial base the Ottoman Turks expanded westward across Anatolia to the Bosporus and Dardanelles straits, and then across those straits into southeastern Europe. In the process of this expansion, they conquered the Orthodox Christian Byzantine empire with the capture in 1453 of Constantinople, known thereafter in the Ottoman empire as Istanbul.

Ismail established the Safavid empire through his conquest of Persia, the core of the Safavid realm. Babur, who founded the Mughal empire, was descended from both Genghis Khan and Tamerlane, and he was the latest in a succession of invaders who had attempted to conquer India. During the early eleventh century Mahmud—ruler of the Ghazni state in present-day Afghanistan—moved east across the Indus River into northwestern India. From the thirteenth through the sixteenth centuries the Delhi Sultanate controlled northern India. While each of these invaders made progress into northern India, they were stopped from advancing into southern India by various Hindu states. Hinduism had emerged as the primary religion of India with the development of Aryan culture, and the resulting animosity between Hindus and Muslims became the defining division of the Mughal empire.

Each of these three empires reached its apex during the sixteenth and/or seventeenth centuries and then began to decline, each at its own pace. Similar sets of factors contributed to the rise and then the fall of each. During the eighteenth century the Safavid empire disintegrated, and the Mughal empire was eclipsed by advancing British interests.
The Ottoman empire held on for nearly two centuries longer, before giving way after the First World War to a nationalist movement that resulted in the establishment of Turkey. Its expansion into Europe distinguished the Ottoman empire from the Mughal and the Safavid empires: it was the only one of the three to expand into Europe, influencing various southeast European peoples. That expansion constituted a conduit through which Islam spread, and the Ottoman empire came to incorporate more ethnically and religiously diverse peoples as a result.

In their study of empires historians invariably pose the question of when an empire peaked. This question cannot be answered objectively; any answer is a matter of interpretation. For example, the apex of an empire's fortunes could be measured in years, decades, or even centuries, represented by the metaphoric peak or plateau. Arguably the Ottoman empire peaked during the sixteenth and seventeenth centuries, as marked by its two most ambitious military operations: the sieges of Vienna in 1529 and 1683. The failure of both sieges marked the high point of Ottoman imperial power, although that was not immediately recognized as such. With respect to its longer and more gradual decline, beginning arguably at the end of the seventeenth century, the Ottoman empire stood apart from the Mughal and the Safavid empires.

Safavid expansion was constrained by the Ottoman empire to the west and the Mughal empire to the east. For Safavid rulers, expansion was more a matter of regaining lost ground than of taking over new territory, such as an entire empire outside of the Islamic world. These territorial struggles were exacerbated by the division between the Shi'ite identity of the Safavid empire and the Sunni predominance of the Mughal and the Ottoman empires.

The reign of Akbar (1556 – 1605) marked the apex of the Mughal empire.
Akbar made the most ambitious effort to bring together Hindus and Muslims in a harmonious synthesis that would form a new type of Indian civilization. Arguably India still has not achieved that today, as made clear by the 1947 partition of Muslims and Hindus into the newly independent nations of Pakistan and India. Hindu kingdoms across southern India blocked Mughal expansion.

The similarities and parallels in the origins of these three empires continue with their development and fates. These west Asian empires represent a different direction of development and orientation during the early modern period. In their origins lie the seeds of these distinctions, along with the seeds of their failure to keep up with the emerging European powers.

Attributions

Images courtesy of Wikimedia Commons.

Title Image - Painting of Shah Ismail I by an unknown medieval Venetian artist. Attribution: Uffizi, Public domain, via Wikimedia Commons. Located at: https://commons.wikimedia.org/wiki/File:Shah_Ismail_I.jpg. License: CC BY-SA: Attribution-ShareAlike

Licenses and Attributions

CC LICENSED CONTENT, SHARED PREVIOUSLY

- Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike

CC LICENSED CONTENT, SPECIFIC ATTRIBUTION

- History of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Decline and modernization of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Foreign relations of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Eastern Question. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
https://oercommons.org/courseware/lesson/87818/overview
Shiite Islam and the Safavid Dynasty

Overview

Shiite Islam and the Safavid Dynasty

Learning Objectives

- Identify the historical significance of the conversion of Iran to a Shiite Islamic state under the Safavid Dynasty (1501 – 1722).
- Assess the contributions of the Safavid Dynasty to the early modern world.

Key Terms / Key Concepts

Ismail: founding ruler of the Safavid empire

Safavid Dynasty: dynasty of Turkic tribal leaders who ruled Persia from the sixteenth into the eighteenth centuries

Abbas: ruler at the apex of Safavid power

Isfahan: Safavid capital city

In 1501 Shah Ismail influenced the course of southwest Asian history by establishing a new dynasty over Persia and rechristening the state Iran. The Safavid Dynasty, or empire, was one of three Muslim land-based empires in west and south Asia; the other two were the Ottoman empire, with Anatolia as its base, and the Mughal empire, which controlled India. These empires—also known as the "gunpowder empires"—rose, peaked, and declined between the fourteenth and the twentieth centuries, against the backdrop of the rise of the West. By the eighteenth century major Western powers had eclipsed all three empires, both economically and technologically.

The Safavids

The Safavids were a hereditary dynasty of Turkic tribal leaders who established their autonomy during the fourteenth century with the decline of the Persian Khanate—one of the successor empires to Genghis Khan's thirteenth-century trans-Asian empire. The decline of the Persian Khanate was part of the larger downfall of the Mongol empires across Asia, which left a power vacuum in west Asia that was temporarily filled by various tribal confederations. The name of the Safavid dynasty comes from the first Safavid leader: Safi al-Din.
To increase their power, subsequent Safavid leaders intermarried into the tribal elites of various west Asian peoples, including Circassians, Pontic Greeks of Anatolia, Georgians, and Turkmens, the last being one of a number of Turkic groups.

Shah Ismail and Shiism

The Safavid conquest of Persia was the product of a family struggle followed by a conventional campaign of territorial conquest. Ismail, a younger son of the late fifteenth-century Safavid shah, established himself as the dynastic leader in a successful struggle with other members of his extended family. He then installed the Safavid dynasty over Persia during the early sixteenth century. In 1501 his forces took control of Azerbaijan, and over the rest of the decade he expanded and consolidated his empire through a succession of conquests across Persia.

In the context of Persian history, the Safavid Dynasty was one in a succession of dynasties, going back to the Achaemenid Dynasty, that ruled over Persia. While accepting the Persian cultural base, the Safavid Dynasty took Persian civilization in a new direction by embracing Shiism, in the process diverging from the Sunni majorities of the Mughal and the Ottoman empires. This is significant because Shiites and Sunnis disagree over a number of issues concerning Islamic doctrine, as well as the selection of the Islamic leader known as the caliph. As the founder of the Safavid Dynasty, Ismail initiated this process of embracing Shiism with his conquest of Persia.

Ottoman Sultan Selim I invaded the northwestern corner of the Safavid empire in 1514, culminating in the Ottoman victory over Safavid forces at Chaldiran, near Tabriz. Although Selim could not maintain control of this part of the Safavid realm, the animosity between the two empires continued, punctuated by formal hostilities until the end of the Safavid Dynasty.
This Safavid Shiite divergence metastasized into a violent religious and political division that has manifested itself in numerous wars down to the present, including the 1980 – 88 Iran-Iraq War.

Shah Abbas the Great

The Safavid Dynasty reached the zenith of its power during the reign of Shah Abbas the Great. He centralized and strengthened the Safavid government and military, allowing the latter to compete more effectively with the Ottoman empire. When he came to power, Abbas restored the declining Safavid empire and took steps to increase Safavid power relative to Mughal and Ottoman power. He initiated military campaigns against both powers during the early seventeenth century, regaining some of the territory previously lost by the Safavid empire. These campaigns, along with a strike against the Portuguese at Hormuz, increased the Safavid presence along the Persian Gulf.

To improve commerce and security across his empire, Abbas also commissioned a network of roads with caravanserais constructed about every twenty miles; caravanserais were secure facilities where caravans could stay overnight. At Abbas's instruction, government officials also worked with merchant groups to encourage trade, which was challenged by Persian geography at the time.

Isfahan

Every empire that aspires to be a great empire needs a great capital city. Such a capital city serves not only as the center of government but also as the focus of the empire, the source of culture and political power, and the symbol of the empire's stability, power, and growth. Isfahan was that capital city for the Safavid empire and, in this role, was comparable to Istanbul/Constantinople in the Ottoman empire and Agra in the Mughal empire. As a settlement, the roots of Isfahan go back about four thousand years.
During the Achaemenid dynasty, about 2,500 years ago, Isfahan emerged as a small city; it subsequently served as a regional center through a succession of imperial periods in Persian history, including Parthian, Sassanid, and Abbasid rule, as well as the reign of Timur. Isfahan was known for its ethnic diversity.

Shah Abbas made Isfahan the capital city of his Safavid empire in 1598. Abbas and his successors sponsored numerous projects in the city, and these projects embodied both Islamic and Persian features in their design and construction. Of the Safavid rulers, Abbas I had the most ambitious plans for Isfahan, matching his ambitions for his empire. Initially he intended to renovate portions of the existing city, but in order to avoid opposition he later decided instead to add on to it with new construction to the south. Two main features of Abbas's "new" city were the Maidan-i Naqsh-i Jahan—the center of the "new" city—and the Chahar Bagh Avenue, which ran through the "new" portions of Isfahan to the old. In 1647 Shah Abbas II, grandson of Shah Abbas I, had the Chihil Sutun palace completed.

During the Safavid period the city grew with the arrival of thousands of migrants from the Caucasus, who were welcomed by Safavid rulers, including Abbas. These migrants made the city more ethnically and culturally diverse; such diversity was a characteristic of a number of imperial dynasties in Persian history, including the Achaemenid Dynasty. Eventually, by one count, Isfahan could boast 600,000 residents, 1,802 caravanserais, 162 mosques, 273 public baths, 48 colleges and academies, and an indeterminable number of coffeehouses.

Isfahan remained the urban center of the Safavid empire until the empire's downfall, and in this capacity it attracted the interest of European travelers as an extension of their grand tours. The "Grand Tour" occurred during the so-called early modern period and was a ritual of wealthy Europeans, mostly the nobility.
European travelers from the period after the Thirty Years War to the beginning of the French Revolution—a period of relative stability in Europe—toured European cities, along with sites in Asia, for the purposes of exposure and education. The Grand Tour was the predecessor to the mass tourism that grew out of the mass production of the Industrial Revolution. Decline and Fall of the Safavid Empire Questions about the fall of any empire, civilization, or culture can be phrased either in terms of why it fell or in terms of why it lasted as long as it did. In the early eighteenth century, the Safavid Dynasty became the first of the so-called gunpowder empires to collapse. A number of factors contributed to the Safavid empire lasting into the early eighteenth century, including Shiism as a unifying force, adequate government administration, commercial prosperity, internal tranquility, and the absence of acute and existential foreign threats, including from the Mughal and Ottoman empires. From a more pessimistic perspective, a number of factors contributed to the decline and collapse of the Safavid Dynasty. The Safavid Dynasty did not create the financial infrastructure necessary for economic development. In addition, it did not keep up with the innovations in military and maritime technology being made by various European powers. These European powers had embraced earlier technological advances and inventions from Asia and had improved upon them. In general terms these factors also contributed to the downfall of the Mughal and the Ottoman empires. Relative to the Mughal and the Ottoman empires, the Safavid empire fell behind in terms of trade and experienced an outflow of silver, along with growing domestic instability. In the early 1720s, Afghans overran the Safavid empire, capturing Isfahan and bringing down the Safavid dynasty in 1723.
Temporarily restored in the 1730s by Nadir Khan, an adventurer-conqueror, this reincarnation died with Khan in 1747. The Safavid empire vividly illustrated the weaknesses of the three so-called gunpowder empires in the face of early modern European technological advances and economic and imperial expansion. The fate that befell the Safavid empire would later befall, in slower motion, the Mughal empire later in the eighteenth century, as well as the Ottoman empire during the early twentieth century in the aftermath of the First World War. Arguably the Safavid dynasty left as its most momentous legacy its sponsorship of Shi’ite Islam. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - "Joseph Enthroned from a Falnama (Book of Omens), the Iranian Safavid Dynasty, circa 1550". Attribution: Arthur M. Sackler Gallery, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://upload.wikimedia.org/wikipedia/commons/8/8b/Safavid_Dynasty%2C_Joseph_Enthroned_from_a_Falnama_%28Book_of_Omens%29%2C_circa_1550_AD.jpg. License: CC BY-SA: Attribution-ShareAlike - History of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike - Decline and modernization of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Foreign relations of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Eastern Question. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
oercommons
2025-03-18T00:35:07.556247
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87818/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87819/overview
Decline of the Ottoman, Safavid, and Mughal Empires Overview Decline of the Ottoman, the Safavid, and the Mughal Empires The decline of the Mughal, the Ottoman, and the Safavid Empires confirmed the advantages enjoyed by the West as a whole, as well as select European powers in particular. By the late nineteenth century, the Ottoman Empire was being referred to as the “sick man of Europe,” a reference that also could have been applied to the Mughal and the Safavid empires, had each not already expired. European advantages stemmed from Western industrialization and advances in military and maritime technology, along with the organizational improvements and innovations that accompanied industrialization. Learning Objectives - Describe the internal factors that led to decline in the Ottoman, Safavid, and Mughal Empires from the seventeenth through the nineteenth centuries and show how the growing commercial and military power of European nations facilitated that decline. - Define the term “Gunpowder Empire” and evaluate whether the Mughal, the Ottoman, and the Safavid empires should be defined as one. Key Terms / Key Concepts Battle of Lepanto: 1571 naval engagement between the Ottoman Empire and the Holy League, won by the latter and marking the beginning of the decline of the former gunpowder empire: term referring to the Mughal, Ottoman, and Safavid empires Safavid Decline The Safavid empire was the first of the three gunpowder empires to collapse, falling to Afghan forces during the early eighteenth century. Shi’ites dedicated to Shi’ite domination of Persia paved the way for this fate. During the seventeenth and into the early eighteenth centuries, Shi’ite efforts to curtail freedom of expression and even limit freedom of religion within the confines of Islam prompted local, grassroots resistance. In the early eighteenth century, Afghans took advantage of this widespread unrest to seize Isfahan, the Safavid capital.
The dynasty ended in 1723, although its remnants were restored temporarily under Nadir Shah Afshar—founder of the short-lived Afsharid empire—but were extinguished at the end of the eighteenth century. During the nineteenth and the first half of the twentieth century, the sovereignty of the Qajar empire—successor to the Afsharid empire—was continually compromised by the British and the Russians, along with other Western imperial powers. This fate illustrated the inherent vulnerability of the Safavid empire. The Reign of Aurangzeb and the Decline of the Empire The decline of the Mughal empire, the second of these three empires to fall, was a more gradual process driven more directly by European imperialism, particularly British and French expansion. The last of the great Mughals was Aurangzeb Alamgir. During his fifty-year reign, the empire reached its greatest physical size. The Bijapur and Golconda Sultanates, which had been reduced to vassaldom by Shah Jahan, were formally annexed. But the empire also showed unmistakable signs of decline. The bureaucracy had grown corrupt; the huge army used outdated weaponry and tactics. Aurangzeb restored Mughal military dominance and expanded power southward, at least for a while. But Aurangzeb was involved in a series of protracted wars: against the sultans of Bijapur and Golkonda in the Deccan; the Rajputs of Rajasthan, Malwa, and Bundelkhand; the Marathas in Maharashtra; and the Ahoms in Assam. Peasant uprisings and revolts by local leaders became all too common, as did the conniving of the nobles to preserve their own status at the expense of a steadily weakening empire. From the early 1700s, the campaigns of the Sikhs of Punjab—under leaders such as Banda Bahadur and inspired by the martial teachings of their last Guru, Guru Gobind Singh—posed a considerable threat to Mughal rule in Northern India.
Most decisively, the series of wars against the Pashtuns in Afghanistan weakened the very foundation upon which Mughal military might had rested. The Pashtuns formed the backbone of the Mughal army and were some of its most hardened troops. The antagonism shown towards the erstwhile Mughal General Khushal Khan Khattak, for one, seriously undermined the Mughal military apparatus. The increasing association of Aurangzeb's government with Islam further drove a wedge between the ruler and his Hindu subjects. Aurangzeb's policies towards his Hindu subjects were harsh and intended to force them to convert. Temples were despoiled and the harsh "jiziya" tax (which non-Muslims had to pay) was re-introduced. In this climate, contenders for the Mughal throne were many, and the reigns of Aurangzeb's successors were short-lived and beset by strife. The Mughal Empire experienced dramatic reverses as regional nawabs or governors broke away and founded independent kingdoms, such as the Marathas in the south and the Sikhs in the north. In the war of 27 years from 1681 to 1707, the Mughals suffered several heavy defeats at the hands of the Marathas in the south. Additionally, in the early 1700s the Sikhs of the north became increasingly militant in an attempt to fight the oppressive Mughal rule, and the Mughals had to make peace with the Maratha armies. Furthermore, Persian and Afghan armies invaded Delhi, carrying away many treasures, including the Peacock Throne, in 1739. Decline of the Ottoman Empire After a long decline beginning in the 19th century, the Ottoman empire came to an end in the aftermath of its defeat in World War I, when it was dismantled by the Allies after the war ended in 1918. As the third of these three empires to fall, the Ottoman empire was the most successful in competing with the major imperial powers on their terms. The Ottoman empire did not finally collapse until the end of the First World War, having signed on as a co-belligerent with the Central Powers.
One of the early events marking the beginning of the decline of the Ottoman Empire was the 1571 Battle of Lepanto. The Ottoman fleet lost this naval battle with the Holy League, an alliance of European states, in part because of the technological superiority of the Holy League fleet: the European ships were propelled by sail rather than by the oars on which the Ottoman ships depended. This battle foreshadowed a trend by which the Ottoman Empire would continue to stagnate in military and naval technology while the Western Powers would enjoy manifest advances in these areas, advances that would accelerate with the Industrial Revolution. The final high point of Ottoman power also was the turning point that would mark the beginning of measurable Ottoman decline: the 1683 siege of Vienna, the second Ottoman siege of this city. In the first Ottoman siege of Vienna in 1529, defenders of the city were able to outlast the siege and force an Ottoman withdrawal. This unsuccessful siege marked the farthest extent of Ottoman penetration into central Europe and the plateau of Ottoman imperial power. The Ottoman Empire was able to maintain its power and position on this plateau until the second siege of this city in 1683. That second siege was part of the effort by Mahmud IV to expand Ottoman power into central Europe, in a war against the Austrian Empire and the Holy Roman Empire, among other European powers. This Ottoman war effort was in part a confrontation between these European powers and the Ottoman Empire, as well as a religious conflict and crusade by each side, as illustrated by the primary source in this lesson, the 1683 Ottoman declaration of war against the Austrian Empire. In this declaration Mahmud clearly states his intentions in what he hopes will be an existential war. As in 1529, this second Ottoman siege in 1683 failed. A Polish force rescued Vienna and the Austrian Empire.
This second failure by the Ottoman Empire in trying to capture Vienna reaffirmed the limits of Ottoman imperial power. Although not recognized at the time, this second unsuccessful Ottoman siege of Vienna marked the apex of Ottoman expansion before its decline. Over the next two centuries, European powers would continue to widen their technological superiority over the Ottoman Empire, among the other gunpowder empires, in warfare and industrialized manufacturing, along with expanding territorially at the expense of a shrinking Ottoman Empire. Decline and Modernization Beginning in the late 18th century, the Ottoman Empire faced challenges defending itself against foreign invasion and occupation. In response to these threats, the empire initiated a period of tremendous internal reform that came to be known as the Tanzimat. This succeeded in significantly strengthening the Ottoman central state, despite the empire’s precarious international position. Over the course of the 19th century, the Ottoman state became increasingly powerful and rationalized, exercising a greater degree of influence over its population than in any previous era. The process of reform and modernization in the empire began with the declaration of the Nizam-ı Cedid (New Order) during the reign of Sultan Selim III (r. 1789 – 1807) and was punctuated by several reform decrees, such as the Hatt-ı Şerif of Gülhane in 1839 and the Hatt-ı Hümayun in 1856. By the end of this period in 1908, the Ottoman military was somewhat modernized and professionalized according to the model of Western European armies. During the Tanzimat period, the government’s series of constitutional reforms led to a fairly modern conscripted army, banking system reforms, the decriminalization of homosexuality, and the replacement of religious law with secular law and guilds with modern factories.
Defeat and Dissolution The defeat and dissolution of the Ottoman Empire (1908 – 1922) began with the Second Constitutional Era, a moment of hope and promise established with the Young Turk Revolution. It restored the Ottoman constitution of 1876 and brought in multi-party politics with a two-stage electoral system (electoral law) under the Ottoman parliament. The constitution offered hope by freeing the empire’s citizens to modernize the state’s institutions, rejuvenate its strength, and enable it to hold its own against outside powers. Its guarantee of liberties promised to dissolve inter-communal tensions and transform the empire into a more harmonious place. Instead, this period became the story of the twilight struggle of the Empire. The Second Constitutional Era began after the Young Turk Revolution (July 3, 1908) with the sultan’s announcement of the restoration of the 1876 constitution and the reconvening of the Ottoman Parliament. This era was dominated by the politics of the Committee of Union and Progress (CUP) and the movement that would become known as the Young Turks. Although it began as a uniting progressive party, the CUP splintered in 1911 with the founding of the opposition Freedom and Accord Party (Liberal Union or Entente), which poached many of the more liberal Deputies from the CUP. The remaining CUP members, who now took a more dominantly nationalist tone in the face of the enmity of the Balkan Wars, dueled Freedom and Accord in a series of power reversals that culminated in the CUP seizing power from Freedom and Accord in the 1913 Ottoman coup d’état and establishing total dominance over Ottoman politics until the end of World War I. The Young Turk government had signed a secret treaty with Germany and established the Ottoman-German Alliance in August 1914, aimed against the common Russian enemy but aligning the Empire with the German side.
The Ottoman Empire entered World War I after the Goeben and Breslau incident, in which it gave safe harbor to two German ships that were fleeing British ships. These ships—officially transferred to the Ottoman Navy but effectively still under German control—attacked the Russian port of Sevastopol, thus dragging the Empire into the war on the side of the Central Powers in the Middle Eastern theater. The Ottoman position in World War I in the Middle East began to unravel with the Arab Revolt in 1916. This revolt turned the tide against the Ottomans at the Middle Eastern front, where they initially seemed to have the upper hand during the first two years of the war. When the Armistice of Mudros was signed on October 30, 1918, the only parts of the Arabian peninsula still under Ottoman control were Yemen, Asir, the city of Medina, portions of northern Syria, and portions of northern Iraq. These territories were handed over to the British forces on January 23, 1919. The Ottomans were also forced to evacuate the parts of the former Russian Empire in the Caucasus (in present-day Georgia, Armenia, and Azerbaijan), which they had gained towards the end of World War I after Russia’s retreat from the war with the Russian Revolution in 1917. Under the terms of the Treaty of Sèvres, the partitioning of the Ottoman Empire was solidified. The new countries created from the former territories of the Ottoman Empire currently number 39. The occupations of Constantinople and Smyrna mobilized the Turkish national movement, which ultimately won the Turkish War of Independence. The formal abolition of the Ottoman Sultanate was performed by the Grand National Assembly of Turkey on November 1, 1922. The Sultan was declared persona non grata and exiled from the lands that the Ottoman Dynasty had ruled since 1299. Primary Source: Ottoman Sultan Mahmud IV The Great Turks Declaration of War This ominous statement accompanied the resurgence of war between the Ottoman Empire and Habsburg Austria.
The sultan’s threat-laden declaration shows that religious and political questions were inseparable in the Turkish-Austrian rivalry. Ottoman Sultan Mahmud IV (1683), “The Great Turks Declaration of War against the Emperour of Germany (At his Pallace at Adrinople, February 20, 1683)” Mahomet Son of Emperours, Son to the famous and glorious God, Emperour of the Turks, King of Graecia, Macedonia, Samaria, and the Holy-land, King of Great and Lesser Egypt, King of all the Inhabitants of the Earth, and of the Earthly Paradise, Obedient Prince and Son of Mahomet, Preserver of the Towns of Hungaria, Possessour of the Sepulcher of your God, Lord of all the Emperours of the World, from the rising of the Sun to the going down thereof, King of all Kings, Lord of the Tree of Life, Conquerour of Melonjen, Itegly, and the City Prolenix, Great Pursuer of the Christians, Joy of the flourishing World, Commander and Guardian of the Crucified God, Lord of the Multitude of Heathens. We Command you to greet the Emperour Leopold (in case he desire it) and you are our Friends, and a Friend to our Majesty, whose Power we will extend very far.) Thus, You have for some time past acted to our prejudice, and violated our Frendship, although we have not offended you, neither by War, or any otherwise; but you have taken private advice with other Kings, and your Council’s how to take off your Yoke, in which you have acted very Indiscreetly, and thereby have exposed your People to fear and danger, having nothing to expect but Death, which you have brought upon your selves. For I declare unto you, I will make my self your Master, pursue you from East to West, and extend my Majesty to the end of the Earth; in all which you shall find my Power to your great prejudice. 
I assure you that you shall feel the weight of my Power; and for that you have put your hope and expectation in the strength of some Towns and Castles, I have given command to overthrow them, and to trample under feet with my Horses, all that is acceptable and pleasant in your Eyes, leaving nothing hereafter by which you shall make a friendship with me, or any fortified places to put your trust in: For I have resolved without retarding of time, to ruin both you and your People, to take the 2 German Empire according to my pleasure, and to leave in the Empire a Commemoration of my dreadful Sword, that it may appear to all, it will be a pleasure to me, to give a publick establishment of my Religion, and to pursue your Crucified God, whose Wrath I fear not, nor his coming to your Assistance, to deliver you out of my hands. I will according to my pleasure put your Sacred Priests to the Plough, and expose the Brests of your Matrons to be Suckt by Dogs and other Beasts. You will therefore do well to forsake your Religion, or else I will give Order to Consume you with Fire. This is enough said unto you, and to give you to understand what I would have, in case you have a mind to know it. From German History in Documents and Images Volume 2. From Absolutism to Napoleon, 1648-1815 Ottoman Sultan Mahmud IV’s Declaration of War on Emperor Leopold I, signed at Adrianople [Edirne] (February 20, 1683) Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - 1788 painting of mufti sprinkling rose water on cannon at beginning of Ottoman military campaign. Attribution: de:Johann Hieronymus Löschenkohl, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://upload.wikimedia.org/wikipedia/commons/3/37/Loeschenkohl03.jpg. 
License: CC BY-SA: Attribution-ShareAlike - https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-ottoman-empire - History of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike - Decline and modernization of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Foreign relations of the Ottoman Empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Eastern Question. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - https://rachel.worldpossible.org/mods/en-olpc/wikislice-en/files/articles/Mughal_Empire.htm - "The text is available under the GNU Free Documentation License: http://www.gnu.org/copyleft/fdl.html".
https://oercommons.org/courseware/lesson/87856/overview
Early Africa Overview Early Africa Civilizations emerged in Africa centuries prior to the arrival of Islam into the continent (7th century CE) and later European explorers and traders (15th century). These complex cultures developed with very little contact with civilizations outside of the continent. Learning Objectives - Identify the geographical features of Africa that influenced the development of cultures in this region. - Describe the earliest civilizations to develop in Africa south of the Sahara Desert. Key Terms / Key Concepts Sahara Desert: separates North Africa from the rest of the continent of Africa (Sub-Saharan Africa) along the coast of the Mediterranean Sea Savannah: tropical grasslands just to the south of the Sahara Nok Culture: arose in central Africa in modern Nigeria as an advanced culture around 400 BCE Aksum: large empire around 300 CE located in Northeast Africa in modern Ethiopia, as well as North and South Sudan Early Africa Advanced cultures developed deep within the interior of Africa despite relatively little interaction with cultures outside of this region. The vast Sahara Desert separates North Africa, along the coast of the Mediterranean Sea, from the rest of the continent of Africa (Sub-Saharan Africa). This was an isolated area, where cultures evolved practically devoid of contact with other regions prior to the arrival of Islam in North Africa, which began along the Mediterranean Sea in the seventh century CE. Nonetheless, flourishing cultures developed in the Savannah, tropical grasslands just to the south of the Sahara, and in the rain forests of central Africa. People in the Savannah had domesticated millet by 6000 BCE, whereas the inhabitants of the rain forests had domesticated yams by 2000 BCE. By 400 BCE, the Nok culture of central Africa in modern Nigeria showed signs of an advanced culture with its mastery of iron technology and its elaborate terracotta figures that evidence advanced craft production.
The first well-known great empire in Sub-Saharan Africa was Aksum (or Axum), located in Northeast Africa—which is modern Ethiopia, as well as North and South Sudan. After 300 CE the kings of Aksum controlled the Red Sea coast and could, therefore, control the Indian Ocean trade between the Late Roman Empire and the Gupta Empire in India. The wealth from this trade enabled these kings to control their vast empire. In the fourth century CE, the kings of Aksum converted to Christianity. This Ethiopian Church became a Monophysite Church like the Coptic Church in Egypt, which was independent from the Roman Catholic and Orthodox churches in Europe. The Church in Aksum adapted the Coptic script to develop its own script, Ge'ez, so that the Holy Scriptures were available in their own language. After the rise of Islam, the Aksum Empire collapsed, but the Ethiopian Church survived in the mountains of Ethiopia. From the remnants of the Aksum Empire, the kingdom of Abyssinia—the forerunner of the modern nation of Ethiopia—emerged in the 11th century CE.
https://oercommons.org/courseware/lesson/87857/overview
Islamic African Empires Overview Islamic African Empires The arrival of Islam on the continent of Africa in the seventh century CE resulted in the development of a series of Islamic empires across sub-Saharan Africa. Learning Objectives Assess how Trans-Atlantic trade affected the social and political development in Africa and the Americas. Key Terms / Key Concepts Soninke people: the founders of the ancient empire of Ghana c. 750–1240 CE Almoravids: a Berber imperial dynasty of Morocco that formed an empire in the 11th century that stretched over the western Maghreb and Al-Andalus (Spain) Mansa: a Mandinka word meaning “sultan” (king) or “emperor,” particularly associated with the Keita dynasty of the Mali Empire, which dominated West Africa from the 13th century to the 15th century Sahel: the ecoclimatic and biogeographic zone of transition in Africa between the Sahara to the north and the Sudanian Savanna to the south (Having a semi-arid climate, it stretches across the south-central latitudes of Northern Africa between the Atlantic Ocean and the Red Sea.) Bantu expansion: a postulated millennia-long series of migrations of original proto-Bantu language speakers (Attempts to trace the exact route of the migrations, to correlate it with archaeological evidence and genetic evidence, have not been conclusive. The Bantu traveled in two waves, and it is likely that the migration of the Bantu-speaking people from their core region in West Africa began around 1000 BCE.) Shona: a group of Bantu people in Zimbabwe and some neighboring countries, the main part of which is divided into five major clans and adjacent to some people of very similar culture and languages; the peoples who created empires and states on the Zimbabwe plateau Islamic African Empires Control of trade routes across the Sahara resulted in a series of empires in western Africa. These empires used the revenues from this trade to build their armies and bureaucracies.
The first of these empires was Ghana (990 – 1180 CE). The rulers of this empire converted to Islam and even imported Arab clerics as administrators, but their subjects still practiced their native polytheistic religions. Another such great empire was Mali. Under its greatest king, Mansa Musa, the capital of this empire, Timbuktu, became a center of Islamic scholarship and the home of a university. In 1324, Mansa Musa travelled to the Islamic holy city of Mecca on a pilgrimage and astounded fellow Muslim pilgrims with the vast quantities of gold that he distributed as gifts. In the 15th century CE, with the decline of Mali, the empire of Songhai arose with its capital at the trading city of Gao. Islam and trade also resulted in the emergence of Swahili trading cities along the east coast of Africa on the Indian Ocean. In the first century CE, Bantu peoples from Central Africa began migrating to East Africa and South Africa. Bantu peoples shared a common language, iron technology, and an economy focused on cattle raising. With the rise of Islam, Arab merchants sailed down the coast of East Africa and established trading posts. The mixture of Bantu and Arab culture in these trading posts resulted in a distinctive Swahili culture. The trading posts developed into large city-states by 1200, which were ruled by sultans. The growth of these Swahili city-states sparked the emergence of an advanced culture in South Africa—in modern Zimbabwe and Zambia. Swahili merchants along the east coast traded with the Bantu peoples of the interior for cattle hides, salt, and gold. Between 900 and 1500 CE, the rulers of the Shona established powerful kingdoms in this region that drew revenue from this trade. Shona kings built massive stone palaces (Zimbabwe) whose ruins still impress visitors today. The Shona did not convert to Islam. They instead continued to practice their polytheistic religion and revere their kings as gods.
The Ghana Empire The Ghana Empire, called the Wagadou (or Wagadu) Empire by its rulers, was located in what is now southeastern Mauritania, western Mali, and eastern Senegal. It derived its power from the control of trans-Saharan trade, particularly the gold trade. There is no consensus on when precisely it originated, but its development is linked to the changes in trade that emerged after the introduction of the camel to the western Sahara. By the time of the Muslim conquest of North Africa in the 7th century, the camel had changed the earlier, more irregular trade routes into a trade network running from Morocco to the Niger River. This regular and intensified trans-Saharan trade in gold, salt, and ivory allowed for the development of larger urban centers and encouraged territorial expansion to gain control over different trade routes. The Ghana ruling dynasty was first mentioned in written records in 830, and thus the 9th century is sometimes identified as the empire’s beginning. In the medieval Arabic sources, the word “Ghana” can refer to a royal title, the name of a capital city, or a kingdom. From the 9th century, Arab authors mention the Ghana Empire in connection with the trans-Saharan gold trade. Al-Bakri, who wrote in the 11th century, described the capital of Ghana as consisting of two towns six miles apart, one inhabited by Muslim merchants and the other by the king of Ghana. According to the tradition of the Soninke people, they migrated to southeastern Mauritania in the 1st century, and as early as around 100 CE created a settlement that would eventually develop into the Ghana Empire. Other sources identify the beginnings of the empire sometime between the 4th century and the mid-8th century. Most information about the economy of Ghana comes from al-Bakri. He noted that merchants had to pay a one gold dinar tax on imports of salt and two on exports of salt. 
Al-Bakri also mentioned copper and “other goods.” Imports probably included products such as textiles and ornaments. Many of the hand-crafted leather goods found in old Morocco originated from the Ghana Empire. Tribute was also received from various tributary states and chiefdoms at the empire’s periphery. The Ghana Empire lay in the Sahel region to the north of the West African gold fields and was able to profit from controlling the trans-Saharan gold trade. The early history of Ghana is unknown, but there is evidence that North Africa had begun importing gold from West Africa before the Arab conquest in the middle of the 7th century. Much testimony on ancient Ghana comes from the recorded visits of foreign travelers, who could provide only a fragmentary picture. Islamic writers often commented on the social-political stability of the Empire based on the seemingly just actions and grandeur of the king. Al-Bakri questioned merchants who visited the empire in the 11th century and wrote of the king hearing grievances against officials and being surrounded by great wealth. Ghana appears to have had a central core region and was surrounded by vassal states. One of the earliest sources, al-Ya’qubi, writing in 889/890 (276 AH), noted that “under the king’s authority are a number of kings.” These “kings” were presumably the rulers of the territorial units often called kafu in Mandinka. In al-Bakri’s time, the rulers of Ghana had begun to incorporate more Muslims into government, including the treasurer, his interpreter, and “the majority of his officials.” Given scarce Arabic sources and the ambiguity of the existing archaeological record, it is difficult to determine when and how Ghana declined and fell. According to Arab tradition, Ghana fell when it was sacked by the Almoravid movement in 1076 – 1077, but this interpretation has been questioned.
Some historians have argued that the notion of any Almoravid military conquest is merely perpetuated folklore, derived from a misinterpretation of, or limited reliance on, Arabic sources. Other historians have maintained that Almoravid political agitation somehow contributed to Ghana’s demise. While the evidence for conquest is unclear, the influence and success of the Almoravid movement in securing West African gold and circulating it widely would have required a high degree of political control. The archaeology of ancient Ghana, however, does not show signs of the rapid change and destruction that would be associated with any Almoravid-era military conquests. Historians assume that the conflict with the Almoravids pushed Ghana over the edge, ending the kingdom’s position as a commercial and military power by 1100. It collapsed into tribal groups and chieftaincies, some of which later assimilated into the Almoravids, while others founded the Mali Empire. Despite ambiguous evidence, it is clear that Ghana was incorporated into the Mali Empire around 1240.

Mali

The Mali Empire was an empire in West Africa that lasted from 1230 to 1600 and profoundly influenced the culture of the region through the spread of its language, laws, and customs along lands adjacent to the Niger River, as well as other areas consisting of numerous vassal kingdoms and provinces. The Mali Empire, also historically referred to as the Manden Kurufaba, was founded by Sundiata Keita and became renowned for the wealth of its rulers.

Modern oral traditions record that the Mandinka kingdoms of Mali or Manden had already existed several centuries before their unification, as small states just to the south of the Soninké empire of Wagadou (the Ghana Empire). This area was composed of mountains, savanna, and forest, providing ideal protection and resources for a population of hunters. Those not living in the mountains formed small city-states, such as Toron, Ka-Ba, and Niani.
In approximately 1140, the Sosso kingdom of Kaniaga, a former vassal of Wagadou, began conquering the lands of its old masters. By 1180, it had even subjugated Wagadou, forcing the Soninké to pay tribute. In 1203, the Sosso king Soumaoro of the Kanté clan came to power and reportedly terrorized much of Manden. After many years in exile, first at the court of Wagadou and then at Mema, Sundiata, a prince who would become founder of the Mali Empire, was sought out by a Niani delegation and begged to combat the Sosso and free the kingdoms of Manden.

Returning with the combined armies of Mema, Wagadou, and all the rebellious Mandinka city-states, Sundiata led a revolt against the Kaniaga Kingdom around 1234. The combined forces of northern and southern Manden defeated the Sosso army at the Battle of Kirina (then known as Krina) in approximately 1235. This victory resulted in the fall of the Kaniaga kingdom and the rise of the Mali Empire. After the victory, King Soumaoro disappeared, and the Mandinka stormed the last of the Sosso cities. Maghan Sundiata received the title “mansa,” which translates roughly to emperor. At the age of eighteen, he gained authority over all twelve kingdoms in an alliance known as the Manden Kurufaba. He was crowned under the throne name Sundiata Keita, becoming the first Mandinka emperor, and the Keita name thereafter became that of the ruling clan.

The Mali Empire covered a larger area for a longer period than any other West African state before or since. What made this possible was the decentralized nature of administration throughout the state: the mansa managed to keep tax money flowing and nominal control over the area without agitating his subjects into revolt. Officials at the village, town, city, and county levels were elected locally, and only at the state or provincial level was there any palpable interference from the central authority in Niani.
Provinces picked their own governors via their own customs, but governors had to be approved by the mansa and were subject to his oversight.

The Mali Empire flourished because of trade above all else. It contained three immense gold mines within its borders, and the empire taxed every ounce of gold or salt that crossed its frontiers. By the beginning of the 14th century, Mali was the source of almost half the Old World’s gold, exported from mines in Bambuk, Boure, and Galam. There was no standard currency throughout the realm, but several forms were prominent by region. The towns of the Mali Empire were organized as both staging posts in the long-distance caravan trade and trading centers for the various West African products (e.g., salt, copper). Ibn Battuta, a Moroccan Muslim traveler and scholar, observed the employment of slave labor. During most of his journey, Ibn Battuta traveled with a retinue that included slaves, most of whom carried goods for trade but would also be traded themselves. On the return to Morocco, his caravan transported 600 female slaves, which suggests that slavery was a substantial part of the commercial activity of the empire.

Thanks to steady tax revenue and a stable government beginning in the last quarter of the 13th century, the Mali Empire was able to project its power throughout its own extensive domain and beyond. The empire maintained a semi-professional full-time army in order to defend its borders. The entire nation was mobilized, with each clan obligated to provide a quota of fighting-age men. Historians who lived during the height and decline of the Mali Empire consistently recorded its army at 100,000, with 10,000 of that number being cavalry.

The Mali Empire reached its largest size under the Laye Keita mansas (1312 – 1389). The empire’s total area included nearly all the land between the Sahara Desert and the coastal forests.
It spanned modern-day Senegal, southern Mauritania, Mali, northern Burkina Faso, western Niger, the Gambia, Guinea-Bissau, Guinea, the Ivory Coast, and northern Ghana.

The first ruler from the Laye lineage was Kankan Musa Keita (or Moussa), also known as Mansa Musa. He embarked on a large building program, raising mosques and madrasas (Muslim schools) in Timbuktu and Gao. He also transformed Sankore from an informal madrasah into an Islamic university; by the end of his reign, the Sankoré University was fully staffed and held a large collection of books. During this period, there was an advanced level of urban living in the major centers of Mali. Sergio Domian, an Italian art and architecture scholar, wrote the following about this period: “Thus was laid the foundation of an urban civilization. At the height of its power, Mali had at least 400 cities, and the interior of the Niger Delta was very densely populated.”

Mansa Mahmud Keita IV was the last emperor of Manden. He launched an unsuccessful attack on the city of Djenné in 1599. The battle marked the effective end of the great Mali Empire and set the stage for a large number of smaller West African states to emerge. Around 1610, Mahmud Keita IV died. Oral tradition states that he had three sons who fought over Manden’s remains. No single Keita ever ruled Manden after Mahmud Keita IV’s death; thus his death marked the end of the Mali Empire.

Songhai

The Songhai Empire (also transliterated as Songhay) was a state that dominated the western Sahel in the 15th and 16th centuries. At its peak, it was one of the largest states in African history. The state is known by its historiographical name, derived from its leading ethnic group and ruling elite, the Songhai. Sonni Ali established Gao as the capital of the empire, although a Songhai state had existed in and around Gao since the 11th century.
Other important cities in the empire were Timbuktu and Djenné, conquered in 1468 and 1475 respectively, where urban-centered trade flourished. Initially, the empire was ruled by the Sonni dynasty (c. 1464 – 1493), but it was later replaced by the Askiya dynasty (1493 – 1591).

During the second half of the 13th century, Gao and the surrounding region had grown into an important trading center and attracted the interest of the expanding Mali Empire. Mali conquered Gao towards the end of the 13th century, and the town would remain under Malian hegemony until the late 14th century. But as the Mali Empire started to disintegrate, the Songhai reasserted control of Gao. Songhai rulers subsequently took advantage of the weakened Mali Empire to expand Songhai rule. In the second half of the 14th century, disputes over succession weakened the Mali Empire, and in the 1430s Songhai, previously a Mali dependency, gained independence under the Sonni Dynasty. Around thirty years later, Sonni Sulayman Dama attacked Mema, the Mali province west of Timbuktu, paving the way for his successor, Sonni Ali, to turn his country into one of the greatest empires sub-Saharan Africa has ever seen.

Sonni Ali reigned from 1464 to 1492. Like Songhai kings before him, he was a Muslim. In the late 1460s, he conquered many of the Songhai’s neighboring states, including what remained of the Mali Empire. He was arguably the empire’s most formidable military strategist and conqueror. Under his rule, Songhai reached a size of over 1,400,000 square kilometers. During his campaigns for expansion, Ali conquered many lands, repelling attacks from the Mossi to the south and overcoming the Dogon people to the north. He annexed Timbuktu in 1468, after Islamic leaders of the town requested his assistance in overthrowing marauding Tuaregs (Berber people with a traditionally nomadic pastoralist lifestyle) who had taken the city following the decline of Mali.
However, Ali met stark resistance after setting his sights on the wealthy and renowned trading town of Djenné (also known as Jenne). After a persistent seven-year siege, he was able to forcefully incorporate it into his vast empire in 1473, but only after having starved its citizens into surrender.

Oral traditions present a conflicted image of Sonni Ali. On the one hand, the invasion of Timbuktu destroyed the city, and Ali was described as an intolerant tyrant who conducted a repressive policy against the scholars of Timbuktu, especially those of the Sankore quarter who were associated with the Tuareg. On the other hand, his control of critical trade routes and cities brought great wealth. He is thus often presented as a powerful politician and great military commander, and under his reign Djenné and Timbuktu became great centers of learning.

Following Ali’s reign, Askia the Great strengthened the Songhai Empire and made it the largest empire in West Africa’s history. At its peak under his reign, the Songhai Empire encompassed the Hausa states as far as Kano (in present-day Nigeria) and much of the territory that had belonged to the Songhai empire in the west. His policies resulted in a rapid expansion of trade with Europe and Asia, the creation of many schools, and the establishment of Islam as an integral part of the empire. Askia opened religious schools, constructed mosques, and opened his court to scholars and poets from throughout the Muslim world. He was also tolerant of other religions and did not force Islam on his people. Among his great accomplishments was an interest in astronomical knowledge, which led to the development of astronomy and observatories in the capital.

Not only was Askia a patron of Islam, but he was also gifted in administration and in encouraging trade. He centralized the administration of the empire and established an efficient bureaucracy that was responsible for, among other things, tax collection and the administration of justice.
He also demanded that canals be built in order to enhance agriculture, which would eventually increase trade. More important than anything else Askia did for trade was his introduction of weights and measures and his appointment of an inspector for each of Songhai’s important trading centers. During his reign, Islam became more widely entrenched, trans-Saharan trade flourished, and Saharan salt mines were brought within the boundaries of the empire.

However, as Askia the Great grew older, his power declined. In 1528, his sons revolted against him and declared Musa, one of Askia’s many sons, as king. Following Musa’s overthrow in 1531, the Songhai Empire went into decline. Multiple attempts at governing the empire by Askia’s sons and grandsons failed, and amid the political chaos and multiple civil wars within the empire, Ahmed al-Mansur, the Sultan of Morocco, invaded Songhai. The main reason for the Moroccan invasion of Songhai was to seize control of and revive the trans-Saharan trade in salt and gold. The Songhai military, during Askia’s reign, consisted of full-time soldiers, but the king never modernized his army. The empire fell to the Moroccans and their firearms in 1591.

Before the collapse of the Songhai Empire in the 16th century, the Songhai city of Timbuktu at its peak was a thriving cultural and commercial center where Arab, Italian, and Jewish merchants all gathered for trade. Trade existed throughout the empire thanks to the Songhai standing army stationed in the provinces. Central to this trade were the empire’s independent gold fields, since gold was one of its major exports. The Julla (merchants) would form partnerships, and the state would protect these merchants and the port cities of the Niger.

The Songhai economy was based on a clan system. The clan a person belonged to ultimately decided one’s occupation. The most common occupations were metalworkers, fishermen, and carpenters.
Lower caste participants consisted mostly of non-farm-working immigrants, who at times were provided special privileges and held high positions in society. At the top were noblemen and direct descendants of the original Songhai people, followed by freemen and traders. At the bottom were war captives and European slaves obligated to labor, especially in farming. Criminal justice in Songhai was based mainly, if not entirely, on Islamic principles, especially during the rule of Askia the Great. Upper classes in society converted to Islam, while lower classes often continued to follow traditional religions. Sermons emphasized obedience to the king.

Sonni Ali established a system of government under the royal court, later expanded by Askia, which appointed governors to preside over local tributary states situated around the Niger valley. Local chiefs were still granted authority over their respective domains as long as they did not undermine Songhai policy. Tax was imposed on peripheral chiefdoms and provinces to ensure the dominance of Songhai, and in return these provinces were given almost complete autonomy. Songhai rulers only intervened in the affairs of these neighboring states when a situation became volatile, which was usually an isolated incident. Each town was represented by government officials who held positions and bureaucratic responsibilities.

The Kanem Empire

At its height, the Kanem Empire (c. 700 – 1376) encompassed an area covering Chad, parts of southern Libya and eastern Niger, northeastern Nigeria, and northern Cameroon. The history of the empire is mainly known from the Royal Chronicle, or Girgam, discovered in 1851 by the German traveler Heinrich Barth.

The empire of Kanem began forming around 300 CE under the nomadic Tebu-speaking Kanembu. The Kanembu eventually abandoned their nomadic lifestyle and founded a capital around 700 CE under the first documented Kanembu king (mai), known as Sef of Saif.
The capital of Njimi grew in power and influence under Sef’s son, Dugu. This transition marked the beginning of the Duguwa dynasty. The mais of the Duguwa were regarded as divine kings and belonged to the ruling establishment known as the magumi. Despite changes in dynastic power, the magumi and the title of mai would persevere for over a thousand years.

The major factor that later influenced the history of the state of Kanem was the early penetration of Islam, which came with North African traders: Berbers and Arabs. In 1085, a Muslim noble by the name of Hummay removed the last Duguwa king, Selma, from power and thus established the new dynasty of the Sefuwa (also spelled Sayfawa). The introduction of the Sefuwa dynasty meant radical changes for the Kanem Empire. First, it meant the adoption of Islam and Muslim religious practices by the court and in state policies. Second, the identification of the dynasty’s founders had to be revised. Islam offered the Sefuwa rulers the advantage of new ideas from Arabia and the Mediterranean world, as well as literacy, in the form of the Arabic language. But many people resisted the new religion, favoring traditional beliefs and practices.

Kanem’s expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (ca. 1221 – 1259), also of the Sefuwa dynasty. Dabbalemi initiated diplomatic exchanges with sultans in North Africa and apparently arranged for the establishment of a special hostel in Cairo to facilitate pilgrimages to Mecca. During his reign, he declared jihad, or “holy war,” against the surrounding tribes and initiated an extended period of conquest. However, he also destroyed the local Mune cult, which was centered around a mysterious sacred object revered by the people. This action sparked widespread revolt, resulting in the uprising of the Tubu and the Bulala. The former was quelled, but the latter lingered on, finally leading to the retreat of the Sefuwa from Kanem to Bornu c. 1380.
By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Between 1359 and 1383, seven mais reigned, but Bulala invaders (from the area around Lake Fitri to the east) killed five of them. This proliferation of mais resulted in numerous claimants to the throne and a series of destructive wars. Finally, around 1380, the Bulala forced Mai Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri.

Even in Bornu, the Sayfawa dynasty’s troubles persisted. During the first three-quarters of the 15th century, for example, fifteen mais occupied the throne. Around 1460, Mai Ali Dunamami defeated his rivals and began the consolidation of Bornu. He built a fortified capital at Ngazargamu, to the west of Lake Chad (in present-day Nigeria), the first permanent home a Sayfawa mai had enjoyed in a century. The Sayfawa rejuvenation was so successful that by the early 16th century, Mai Idris Katakarmabe (1487 – 1509) was able to defeat the Bulala and retake Njimi, the former capital. The empire’s leaders, however, remained at Ngazargamu because its lands were more agriculturally productive and better suited to the raising of cattle. With control over both capitals, the Sayfawa dynasty became more powerful than ever. The two states were merged, but political authority still rested in Bornu.

Kanem-Bornu peaked during the reign of the statesman Mai Idris Alwma (also spelled Alooma or Alawma) in the last decades of the 16th century and the beginning of the 17th. Alwma introduced a number of legal and administrative reforms based on his religious beliefs and Islamic law (sharia). He sponsored the construction of numerous mosques and made a pilgrimage to Mecca, where he arranged for the establishment of a hostel to be used by pilgrims from his empire.
Alwma’s reformist goals led him to seek loyal and competent advisers and allies, and he frequently relied on slaves who had been educated in noble homes. He required major political figures to live at the court, and he reinforced political alliances through appropriate marriages.

Kanem-Bornu under Alwma was strong and wealthy. Government revenue came from tribute (or booty, if a recalcitrant people had to be conquered), sales of slaves, and duties on and participation in trans-Saharan trade. Unlike West Africa, the Chadian region did not have gold. Still, it was central to one of the most convenient trans-Saharan routes. Between Lake Chad and Fezzan lay a sequence of well-spaced wells and oases, and from Fezzan in Libya there were easy connections to North Africa and the Mediterranean Sea. Many products were sent north, including natron (sodium carbonate), cotton, kola nuts, ivory, ostrich feathers, perfume, wax, and hides. However, the most significant export of all was slaves. Imports included salt, horses, silks, glass, muskets, and copper.

Somali Sultanates and Islam

After the arrival of Islam in East Africa in the seventh century, the territory of modern Somalia witnessed the emergence and decline of several powerful sultanates that dominated the regional trade. At no point was the region centralized as one state, and the development of all the sultanates was linked to the central role that Islam played in the area.

The oldest mosque in the city of Zeila, a major port and trading center, dates to the 7th century. In the late 9th century, Muslims were living along the northern Somali seaboard, and evidence suggests that Zeila was already the headquarters of a Muslim sultanate in the 9th or 10th century. This state was governed by local dynasties consisting of Somalized Arabs or Arabized Somalis, who also ruled over the Sultanate of Mogadishu in the Benadir region to the south.
The Sultanate of Mogadishu was an important trading empire that lasted from the 10th century to the 16th century. It rose as one of the pre-eminent powers in the Horn of Africa over the course of the 12th to 14th centuries, before becoming part of the expanding Ajuran Empire. The Mogadishu Sultanate maintained a vast trading network, dominated the regional gold trade, minted its own Mogadishu currency, and left an extensive architectural legacy in present-day southern Somalia. Its first dynasty was established by Sultan Fakr ad-Din. This ruling house was succeeded by the Muzaffar dynasty, and the kingdom subsequently became closely linked with the Ajuran Sultanate.

For many years, Mogadishu stood as the pre-eminent city in what is known as the Land of the Berbers, the medieval Arab term for the Somali coast. Contemporary historians suggest that these Berbers were ancestors of the modern Somalis. During his travels, Ibn Sa’id al-Maghribi (1213 – 1286) noted that the city had already become the leading Islamic center in the region. By the time of the Moroccan traveler Ibn Battuta’s appearance on the Somali coast in 1331, the city was at the zenith of its prosperity. He described Mogadishu as “an exceedingly large city” with many rich merchants, famous for the high-quality fabric it exported to Egypt, among other places.

The Ajuran Sultanate ruled over large parts of the Horn of Africa between the 13th and late 17th centuries. Through a strong centralized administration and an aggressive military stance toward invaders, it successfully resisted an Oromo invasion from the west (a series of expansions in the 16th and 17th centuries by the Oromo people from parts of Kenya and Somalia into Ethiopia) and a Portuguese incursion from the east during the Gaal Madow and the Ajuran-Portuguese wars.
Trading routes dating from the ancient and early medieval periods of Somali maritime enterprise were strengthened or re-established, and foreign trade and commerce in the coastal provinces flourished, with ships sailing to and coming from many kingdoms and empires in East Asia, South Asia, Europe, the Near East, North Africa, and East Africa.

The Ajuran Sultanate left an extensive architectural legacy, being one of the major medieval Somali powers engaged in castle and fortress building. Many of the ruined fortifications dotting the landscapes of southern Somalia today are attributed to the Ajuran Sultanate’s engineers. During the Ajuran period, many regions and peoples in the southern part of the Horn of Africa converted to Islam because of the influence of the Ajuran Islamic government. The royal family, the House of Garen, expanded its territories and established its hegemonic rule through a skillful combination of warfare, trade linkages, and alliances.

As a hydraulic empire, the Ajuran monopolized the water resources of the Shebelle and Jubba rivers. It also constructed many of the limestone wells and cisterns of the state that are still in use today. The rulers developed new systems for agriculture and taxation, which continued to be used in parts of the Horn of Africa as late as the 19th century. The tyrannical rule of the later Ajuran rulers caused multiple rebellions to break out in the sultanate, and at the end of the 17th century the Ajuran state disintegrated into several successor kingdoms and states.

The Warsangali Sultanate was a kingdom centered in northeastern Somalia, with territory extending into parts of the southeast. It was one of the largest sultanates ever established in the territory and, at the height of its power, included the Sanaag region and parts of the northeastern Bari region of the country, an area historically known as Maakhir or the Maakhir Coast.
The Sultanate was founded in the late 13th century in northern Somalia by a group of Somalis from the Warsangali branch of the Darod clan.

The Sultanate of Ifat was a medieval Muslim sultanate in the Horn of Africa. Led by the Walashma dynasty, it was centered in the ancient cities of Zeila and Shewa. The sultanate ruled over parts of what are now eastern Ethiopia, Djibouti, and northern Somalia. Ifat first emerged in the 13th century, when Sultan Umar Walashma (or his son Ali, according to another source) is recorded as having conquered the Sultanate of Showa in 1285. Sultan Umar’s military action was an effort to consolidate the Muslim territories in the Horn of Africa, in much the same way as the Abyssinian Emperor Yekuno Amlak was attempting to consolidate the Christian territories in the highlands during the same period. These two states inevitably came into conflict over Shewa and territories further south. A lengthy war ensued, but the Muslim sultanates of the time were not strongly unified. Ifat was finally defeated by Emperor Amda Seyon I of Abyssinia in 1332.

Despite this setback, the Muslim rulers of Ifat continued their campaign. The Ethiopian emperor branded the Muslims of the surrounding area “enemies of the Lord” and invaded Ifat in the early 15th century. After much struggle, Ifat’s troops were defeated. Ifat eventually disappeared as a distinct polity following the later wars between Abyssinia and Adal, led by Ahmad ibn Ibrahim al-Ghazi, and the subsequent Oromo migrations into the area. Its name is preserved in the modern-day Ethiopian district of Yifat, situated in Shewa.

The Adal Sultanate or Kingdom of Adal was founded after the fall of the Sultanate of Ifat. It flourished from around 1415 to 1577. The sultanate was established predominantly by local Somali tribes, as well as Afars, Arabs, and Hararis. At its height, the polity controlled large parts of Somalia, Ethiopia, Djibouti, and Eritrea.
During its existence, Adal had relations and engaged in trade with other polities in northeast Africa, the Near East, Europe, and South Asia. Many of the historic cities in the Horn of Africa, such as Abasa and Berbera, flourished under its reign, with courtyard houses, mosques, shrines, walled enclosures, and cisterns. Adal attained its peak in the 14th century, trading in slaves, ivory, and other commodities with Abyssinia and kingdoms in Arabia through its chief port of Zeila.

Bantu and Swahili Culture

Swahili culture is the culture of the people inhabiting the Swahili Coast, encompassing today’s Tanzania, Kenya, Uganda, and Mozambique, as well as the adjacent islands of Zanzibar and Comoros and some parts of the Democratic Republic of the Congo and Malawi. They speak Swahili as their native language, which belongs to the Niger-Congo family. Swahili culture is the product of the history of the coastal part of the African Great Lakes region. As with the Swahili language, Swahili culture has a Bantu core with some foreign influences.

Around 3,000 years ago, speakers of the proto-Bantu language group began a millennia-long series of migrations eastward from their homeland between West Africa and Central Africa, at the border of eastern Nigeria and Cameroon. This Bantu expansion first introduced Bantu peoples to central, southern, and southeastern Africa—regions from which they had previously been absent. The Swahili people are mainly united under the mother tongue of Kiswahili, a Bantu language. This also extends to Arab, Persian, and other migrants who reached the coast around the 7th and 8th centuries, providing considerable cultural infusion and numerous loan words from Arabic and Persian.

Bantu settlements straddled the Southeast African coast as early as the beginning of the 1st millennium.
They evolved gradually from the 7th century onward to accommodate an increase in trade (mainly with Arab merchants), population growth, and further centralized urbanization, developing into what would later become known as the Swahili city-states. European archaeologists once assumed during the 19th century that Arab or Persian colonizers had brought stone architecture and urban civilization to the Swahili Coast; today we know that it was local populations who developed the Swahili Coast. Swahili architecture exhibits a range of influences and innovations, and diverse forms and histories interlock and overlap to create densely layered structures that cannot be broken down into distinct stylistic parts.

Swahili City-States

Around the 8th century, the Bantu people began trading with Arab, Persian, Indian, Chinese, and Southeast Asian peoples—a process known as the Indian Ocean trade. As a consequence of long-distance trading routes crossing the Indian Ocean, the emerging Swahili culture was influenced by Arabic, Persian, Indian, and Chinese cultures.

During the 10th century, several city-states flourished along the Swahili Coast and adjacent islands, including Kilwa, Malindi, Gedi, Pate, Comoros, and Zanzibar. These early Swahili city-states were Muslim, cosmopolitan, and politically independent of one another. They grew in wealth because the Bantu Swahili people served as intermediaries and facilitators for local, Arab, Persian, Indonesian, Malaysian, Indian, and Chinese merchants. They all competed against one another for the trade of the Great Lakes region (modern Uganda and Rwanda), and their chief exports were salt, ebony, gold, ivory, and sandalwood. They were also involved in the slave trade.

These city-states began to decline towards the 16th century, mainly as a consequence of the arrival of the Portuguese. Eventually, Swahili trading centers went out of business, and commerce between Africa and Asia on the Indian Ocean collapsed.
The Kilwa Sultanate was one of the more prominent of these sultanates, centered at Kilwa (an island off modern-day Tanzania). At its height, the Kilwa Sultanate’s authority stretched over the entire length of the Swahili Coast. It was founded in the 10th century by Ali ibn al-Hassan Shirazi, a Persian prince of Shiraz. His family ruled the sultanate until 1277, when it was replaced by the Arab family of Abu Moaheb. The latter was overthrown by a Portuguese invasion in 1505. By 1513, the sultanate had already fragmented into smaller states, many of which became protectorates of the Sultanate of Oman.

Despite its origin as a Persian colony, extensive inter-marriage and conversion of local Bantu inhabitants, and later Arab immigration, turned the Kilwa Sultanate into a diverse state not ethnically differentiable from the mainland. It was the mixture of Perso-Arab and Bantu cultures in Kilwa that is credited with creating Swahili as a distinctive East African culture and language. Nonetheless, the Muslims of Kilwa (whatever their ethnicity) would often refer to themselves generally as Shirazi or Arabs, and to the unconverted Bantu peoples of the mainland as Zanj or Khaffirs (infidels).

The Kilwa Sultanate was almost wholly dependent on external commerce. Effectively, it was a confederation of urban settlements, and there was little to no agriculture carried on within the boundaries of the sultanate. Grains (principally millet and rice), meats (cattle and poultry), and other supplies necessary to feed the large city populations had to be purchased from the Bantu peoples of the interior. Kilwan traders from the coast encouraged the development of market towns in the Bantu-dominated highlands of what are now Kenya, Tanzania, Mozambique, and Zimbabwe. The Kilwan mode of living was as middlemen traders, importing manufactured goods (e.g.
cloth) from Arabia and India—which were then swapped in the highland market towns for Bantu-produced agricultural commodities (grain, meats)—and precious raw materials (gold, ivory) that they would export back to Asia.

The diverse history of the Swahili Coast has also resulted in multicultural influences on Swahili arts, including furniture and architecture. The Swahili do not often use designs with images of living beings, owing to their Muslim heritage. Instead, Swahili designs are primarily geometric. The most typical musical genre of Swahili culture is taarab (or tarabu), sung in the Swahili language. Its melodies and orchestration have Arab and Indian influences.

Swahili architecture, a term used to designate a whole range of diverse building traditions practiced or once practiced along the eastern and southeastern coasts of Africa, is in many ways an extension of mainland African traditions. Structural elements such as domes and barrel vaulting, however, clearly connect it to Persian Gulf and South Asian building traditions. Exotic ornament and design elements also connected the architecture of the Swahili Coast to other Islamic port cities. In fact, many of the classic mansions and palaces of the Swahili Coast belonged to wealthy merchants and landowners, who played a key role in the mercantile economy of the region.

Great Zimbabwe

Great Zimbabwe is a ruined city in the southeastern hills of today's Zimbabwe in southern Africa. It was the capital of the Kingdom of Zimbabwe. Construction on the monument began in the 11th century and continued until the 15th century. The exact identity of the Great Zimbabwe builders is at present unknown. The most popular modern archaeological theory is that the edifices were erected by the ancestral Shona people. The ruins at Great Zimbabwe are some of the oldest and largest structures in Southern Africa; they are the second oldest after nearby Mapungubwe in South Africa.
The most formidable edifice, commonly referred to as the Great Enclosure, is the largest ancient structure south of the Sahara Desert. The city and its state, the Kingdom of Zimbabwe, flourished from 1200 to 1500. Its growth has been linked to the decline of Mapungubwe from around 1300, due to climatic change or the greater availability of gold in the hinterland of Great Zimbabwe. At its peak, estimates are that Great Zimbabwe had as many as 18,000 inhabitants. The ruins that survive are built entirely of stone, and they span 730 ha (1,800 acres). This kingdom taxed other rulers throughout the region. It was composed of over 150 tributaries headquartered in their own minor zimbabwes (stone structures). The kingdom controlled the ivory and gold trade from the interior to the southeastern coast of Africa. The Great Zimbabwe people mined copper and iron in addition to gold. Archaeological evidence suggests that Great Zimbabwe became a center for international trading, with a trade network linked to the Kilwa Sultanate and extending as far as China. This international trade was mainly in gold and ivory. Some estimates indicate that more than 20 million ounces of gold were extracted from the ground. That international commerce was in addition to the local agricultural trade, in which cattle were especially important. The large cattle herd that supplied the city moved seasonally and was managed by the court. Archaeological evidence also suggests a high degree of social stratification, with poorer residents living outside of the city. Chinese pottery shards, coins from Arabia, glass beads, and other non-local items have been excavated. Despite these strong international trade links, there is no evidence to suggest exchange of architectural concepts between Great Zimbabwe and other centers such as Kilwa. The rulers of Zimbabwe brought artistic and stone masonry traditions from Mapungubwe. 
In the early 11th century, people from the Kingdom of Mapungubwe in Southern Africa are believed to have settled on the Zimbabwe plateau. There, they would establish the Kingdom of Zimbabwe around 1220. The construction of elaborate stone buildings and walls reached its apex in the Kingdom of Zimbabwe.

Around 1430, Prince Nyatsimba Mutota from Great Zimbabwe traveled north in search of salt among the Shona-Tavara. He defeated the Tonga and Tavara with his army and established his dynasty at Chitakochangonya Hill. The land he conquered would become the Kingdom of Mutapa. Within a generation, Mutapa eclipsed Great Zimbabwe as the economic and political power in Zimbabwe. By 1450, the capital and most of the kingdom had been abandoned. Causes suggested for the decline and ultimate abandonment of the city of Great Zimbabwe include a decline in trade compared to sites further north, the exhaustion of the gold mines, political instability, and famine and water shortages induced by climatic change.

The end of the kingdom resulted in a fragmentation of Shona power. Two bases emerged along a north-south axis. In the north, the Kingdom of Mutapa carried on and even improved upon Zimbabwe's administrative structure. However, it did not carry on the stone masonry tradition to the extent of its predecessor. In the south, the Kingdom of Butua was established as a smaller but nearly identical version of Zimbabwe. Both states were eventually absorbed into the Rozwi Empire in the 17th century.
Attributions

Title Image: https://commons.wikimedia.org/wiki/File:Donkeys,_Timbuktu.jpg Great Mosque of Timbuktu, Mali - Flickr user: Emilio Labrador Santiago de Chile https://www.flickr.com/people/3059349393/, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

Adapted from:
https://courses.lumenlearning.com/boundless-worldhistory/chapter/west-african-empires/ https://creativecommons.org/licenses/by-sa/4.0/
https://courses.lumenlearning.com/boundless-worldhistory/chapter/central-african-empires/ https://creativecommons.org/licenses/by-sa/4.0/
https://courses.lumenlearning.com/boundless-worldhistory/chapter/east-african-empires/ https://creativecommons.org/licenses/by-sa/4.0/
https://courses.lumenlearning.com/boundless-worldhistory/chapter/southern-african-states/ https://creativecommons.org/licenses/by-sa/4.0/
Early European Exploration of Africa

Overview

Early European Exploration of Africa

In the mid-15th century the Portuguese opened up sub-Saharan Africa to European trade.

Learning Objectives

- Assess how Trans-Atlantic trade affected the social and political development in Africa and the Americas.

Key Terms / Key Concepts

São Tomé: island off the coast of West Africa colonized by the Portuguese in 1486, which became the center of the early African slave trade

Elmina: a fortified trading settlement founded by the Portuguese along the West African coast (modern Ghana) in 1482

Early European Exploration and Expansion into Africa

With the arrival of Portuguese explorers and merchants along the African coasts, beginning in the 15th century, trade from Sub-Saharan Africa largely shifted away from the Muslim Arab states of North Africa and the trans-Saharan trade to the coasts of West and East Africa, where European merchants established trading posts. European merchants were at first interested in the ivory and gold trade, but in the 17th century, African slaves became the primary export from Sub-Saharan Africa as the volume of trade along the Atlantic coast exceeded the trans-Saharan trade. The demand for labor in the European-controlled territories in the Western Hemisphere fueled the expansion of this notorious transatlantic slave trade through the 19th century.

The Portuguese Empire

The Portuguese Empire was established beginning in the 15th century and eventually stretched from the Americas to Japan. Its possessions were often strings of coastal trading centers with defensive fortifications, but there were also larger territorial colonies like Brazil, Angola, and Mozambique. White Europeans dominated trade, politics, and society, but there was also a significant mixing of races, and in many places people of mixed ancestry rose to positions of wealth and power in the colonies.
The Portuguese began their empire as a search for access to the gold of West Africa and then the eastern spice trade. In addition, it was hoped that there might well be Christian states in Asia that could become useful allies in Christianity's ongoing battles with the Islamic caliphates. New lands for agriculture, riches and glory for colonial adventurers, and the ambitions of missionary work were other motivations in the building of an empire.

Carrack ships created a maritime network that connected Lisbon with all of its colonies in the west and the Estado da India ('State of India'), as the empire was known east of the Cape of Good Hope. Goods like gold, ivory, silk, Ming porcelain, and spices were carried and traded around the world. Another major trade was in slaves, taken from West and southern Africa and used as labor on plantations in the North Atlantic islands and the Americas.

The North Atlantic Islands

The Portuguese were intrepid mariners, and so it is entirely appropriate that their first colonies should be relatively remote islands. Searching for new resources and land that might remedy Portugal's deficit in wheat, mariners sailed towards the unknown mid-Atlantic Ocean. The Portuguese navigators were able to mount these expeditions thanks to such rich and powerful backers as Prince Henry the Navigator (aka Infante Dom Henrique, 1394 – 1460). Another immeasurable advantage was innovative ship design and the use of the lateen triangular sail.

The first group of islands to be colonized was the volcanic and uninhabited Madeira archipelago. With rich volcanic soil, a mild climate, and sufficient rainfall, the islands were used to grow wheat, vines, and sugar cane. In many ways, the Portuguese colonization of Madeira set the template that all other colonies copied.
The Portuguese Crown partitioned the islands and gave out 'captaincies' (donatarias) as part of a feudal system designed to encourage nobles to fund agricultural and trade development. The Crown retained overall ownership. However, each captain (donatario) was given certain financial and judicial privileges, and they, in turn, gave out smaller parcels of their land (semarias) for development by their tenants, who had to clear the land and begin cultivation within a certain number of years. These captaincies became hereditary offices in many cases.

Settlers were attracted by the hope of a better life, but there were, as there would be in all future colonies, less desirable immigrants as well. These were the undesirables (degredados): people unwanted by the authorities in Portugal who were forcibly transported to the colonies, such as convicts, beggars, reformed prostitutes, orphans, Jews, and religious dissidents.

Another way in which Madeira became a colonial model was its sugar cane plantations, which were created as early as 1455. The success of this crop and its large labor requirement led to slaves being imported from West Africa. The slave-worked plantation system became an important part of the economy in the New World and led to the terrible traffic in humanity that was the Atlantic slave trade.

After Madeira, and following the same pattern, there followed the Portuguese colonization of the Azores and the Cape Verde group. These colonies all became invaluable ports of call for ships sailing from India and the Americas. The Portuguese were not without rivals for these colonies. Portugal and Spain squabbled over possession of the Canary Islands, but the 1479-80 Treaty of Alcáçovas-Toledo and the 1494 Treaty of Tordesillas set out two spheres of influence, which audaciously encompassed the globe.
The vagueness of these agreements caused trouble later, such as over Portugal's right to future discoveries in Africa and Spain's to islands beyond the Canaries, interests which were eventually identified as the Caribbean and even the Americas.

The North Atlantic islands permitted the Portuguese Crown to gain direct access to the gold of West Africa, avoiding the Islamic states in North Africa. A significant obstacle had been Cape Bojador, which seemed to block sailing ships from going south and then returning home to Europe. A solution to this problem was provided by the Atlantic islands and by setting a bold course out away from the African coastline to best use winds, currents, and high-pressure areas. Portuguese mariners could then sail south with confidence, and the ultimate result was the opening up of Asia to European ships.

West Africa & Slavery

The Portuguese, keen to access the West African gold and salt trade, set up several fortified trading settlements along the southern coast (modern Ghana), such as at Elmina in 1482. However, tropical diseases, a lack of manpower, and a reluctance by local rulers to allow male slaves to be exported meant that, at least initially, the profits were limited along the southern coast. African chiefs were keen to trade for firearms, but the Portuguese were not interested in giving them such power.

A more successful strategy focused on the uninhabited islands of Sao Tome and Principe, located off the southern coast of West Africa, which were colonized beginning in 1486. The two islands became heavily involved in the slave trade, and, as in the North Atlantic, the captaincy model for development was used. Settlers on the islands were permitted to trade with communities in West Africa, and those trades proved more successful than the attempts made a few decades before.
Portuguese trade settlements were established on the continent as far south as Luanda (in modern Angola) to take advantage of the well-organized African trade that saw goods travel from the interior along the major rivers (e.g. the Gambia and Senegal) to the coast. Goods acquired included gold, ivory, pepper, beeswax, gum, and dyewoods. Slaves (men and women) were acquired from the Kingdom of Kongo and the Kingdom of Benin, the rulers of which were eager for European trade goods like cotton cloth, mirrors, knives, and glass beads. The islands acted as a gathering point for slaves and as a place to take onboard provisions for the ships that would carry the human cargo. One in five slaves died on these ships, but as many as one in two slaves died between initial capture and arrival at their final destination.

There was little attempt at territorial conquest in West Africa, as trade was thriving and the Europeans did not possess the military resources for such a policy. Some settlements were fortified, but this was usually done with permission from the local African tribal chief. Europeans and resettled Africans had intermarried on islands such as the Cape Verde group, creating an Afro-Portuguese culture with a strong African religious and artistic influence. It was very often these free mixed-race Cape Verdeans (mulattoes) who settled in the trading posts on the coast of Africa.

There were moves to cut out African chiefs and directly acquire slaves from the interior, but this policy soured relations with Kongo. The situation further deteriorated following a reaction against Christian missionaries as traditional cultural activities and tribal loyalties broke down. The Europeans were obliged to move further down the coast to the Ndongo region, where their interference led to a series of wars in a region that soon after became Portuguese Angola.

East Africa

In 1498 the explorer Vasco da Gama (c.
1469 – 1524) sailed around the Cape of Good Hope and into the Indian Ocean, and the Portuguese suddenly gained access to a whole new trade network involving Africans, Indians, and Arabs. The trade network had existed for centuries, but when the Portuguese arrived commerce became violent. Using superior ships and cannons, the Portuguese blasted rival ships out of the water. Their crews were arrested or killed and their cargoes confiscated. The fact that most traders were Muslim was an added motivation for the Europeans, who were still beset with a crusader mentality.

Portuguese attacks on the independent trading cities of the Swahili Coast and on the inland Kingdom of Mutapa in the south (Zimbabwe/Zambia) did not bring any tangible benefits, as traders simply moved to the north or avoided them. When the Portuguese had taken over and fortified the likes of Malindi, Mombasa, Pemba, Sofala, and Kilwa, they found they had already lost the trading partners of these city-states. Then the Omani Arabs of the Persian Gulf arrived. Keen to keep hold of their Red Sea trade routes and re-establish the age-old trade networks, the Omani moved in on the Swahili Coast and captured many cities, including Portuguese Mombasa in 1698.

The lack of success in East Africa eventually drove the Portuguese south to Mozambique, but they were already wholly distracted by the potential of a newly discovered area of the world: India. By the mid-17th century, the Portuguese no longer possessed the monopoly on African trade that they had enjoyed at the beginning of the 16th century. English, French, Dutch, Swedish, and Danish merchants were all competing with one another for access to this market and its most valuable export: slaves.

Attributions

Title Image: https://commons.wikimedia.org/wiki/File:Ghana,_het_fort_Sint_George_d%27Elmina_(3381211949).jpg Elmina, Ghana - Nationaal Archief, Public domain, via Wikimedia Commons

Adapted from: Cartwright, Mark. "Portuguese Empire."
World History Encyclopedia. Last modified July 19, 2021. https://www.worldhistory.org/Portuguese_Empire/. https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en
Origins of Indigenous Peoples of the Americas

Overview

Origins of Indigenous Peoples of the Americas

The indigenous peoples of the Americas arrived in the Western Hemisphere during the last global Ice Age, sometime over 12,000 years ago.

Learning Objectives

- Discuss the origins of the indigenous peoples of the Americas.

Key Terms / Key Concepts

Clovis Culture: early hunters and gatherers in North America c. 12,000 – 8000 BCE who shared a common culture based on the stone tools that they produced

Simple culture: a culture with a subsistence economy and an egalitarian society that is organized by ties of kinship

Complex culture: a culture that produces a surplus of goods and possesses a hierarchical, socially stratified society, as well as a formal system of government

Origins of Indigenous Peoples of the Americas

Our understanding of the settlement of the Western Hemisphere by indigenous Americans has been clouded recently by some spectacular archaeological discoveries. Historians once agreed that the ancestors of indigenous Americans crossed over from Northeast Asia to North America along a land bridge that once connected the two continents where the Bering Strait is today. According to this theory, these nomadic hunters and gatherers migrated during the last Ice Age, when ocean levels were much lower than today, thereby exposing this land bridge. Moreover, it was maintained that many of these immigrants arrived around 12,000 BCE and that these "Paleo-Indians" shared a common culture—which was labeled Clovis Culture—that flourished until approximately 8000 BCE. The term "Clovis" was coined to describe the types of stone tools and weapons which were originally found in Clovis, New Mexico and later at many Paleolithic (Old Stone Age) sites across the continental United States. Recent archaeological discoveries have resulted in the proposal of alternative theories concerning the immigration of Native Americans.
For example, excavations at Monte Verde in Chile have shown that a unique culture flourished at this site in South America earlier than the Clovis culture in North America. This discovery indicates that the migration across the Bering Strait land bridge must have begun far earlier than 12,000 BCE. Other recent archaeological discoveries have cast doubt on the notion that this land bridge was the sole route for the entry of immigrants into the Western Hemisphere. In 1996 the discovery of the 9,000-year-old skeletal remains of "Kennewick Man" in Washington state stunned historians because of his "Caucasoid" features—physical traits associated with the peoples of Europe, the Middle East, and Eurasia. Up to this point, historians had agreed that indigenous Americans were physically akin to the peoples of East Asia, such as the Chinese, Japanese, and Koreans. The physical similarities between "Kennewick Man" and peoples in the isles of Polynesia and Japan (the Ainu), who also possess "Caucasoid" features, have fueled speculation that the ancestors of the Native Americans may include people from these parts of Asia who made the voyage across the Pacific.

The Development of Simple and Complex Cultures

Beginning around 7000 BCE and continuing for the next 5000 years, the peoples of the Americas witnessed the development of diverse cultures. Different population groups adapted to particular environments, which played an important role in shaping their respective cultural traits (i.e. language, religious beliefs, social customs, and means of subsistence). Consequently, in the continental US, for example, the culture of the inhabitants of the arid southwest differed from that of those dwelling in the woodlands east of the Mississippi. This adaptation often involved the use of new technology, such as more refined and complex tools and weapons (i.e. the bow), as well as the domestication of plants with the introduction of agriculture.
The attributes of simple cultures apply broadly to many of these different Native American peoples. Historians label cultures worldwide as either simple or complex based on certain criteria concerning their economy, social structure, and political organization. The level of simplicity or complexity varies from culture to culture. A nomadic hunter and gatherer society represents the simplest culture, whereas American society in the 21st century represents the most complex. The majority of cultures that developed within the continental US and flourished until the arrival of Europeans were simple rather than complex.

Simple cultures generally possess a "subsistence economy" with very little economic specialization. In other words, families in such cultures rely mainly on their own resources and skills to acquire and produce their basic needs, such as food, shelter, and clothing. There are very few specialized occupations, since most members of society must devote much time to meeting basic human needs. Artistic work from such societies, for example, is often less sophisticated than that from more complex societies, since artists did not have time to develop their skills.

Political organization among simple cultures is based on ties of kinship. Families are grouped into clans, who all may share a common ancestor. Clans in turn form larger configurations of tribes, whose members believe that they share a common ancestral origin. In these cultures, community decisions—such as those concerning war or peace or the settlement of internal disputes—are reached by consultation with the clan or tribal elders. The community selects its leaders in such societies based on their personal qualities (i.e. courage, fighting ability, public speaking skills, and charisma). The social structure of such societies is rather egalitarian. The standard of living of the wealthiest and most prestigious families differs little from that of the poorest and least prestigious.
Since there are few opportunities to amass wealth in such societies, it is difficult for any elite group of families to dominate others for an extended period. Before the first wave of European explorers arrived on the shores of the New World, complex cultures or civilizations emerged in the continental United States. These societies, however, did not achieve the same size and wealth as the more complex cultures of Central and South America, such as the Maya, Aztec, or Inca.

Complex Cultures

Complex cultures differ substantially from simple ones regarding their economic, social, and political organization. Complex cultures require an agricultural surplus. Farmers must produce more than what they need for themselves or their families. This resulting surplus feeds various specialists who are not engaged in agriculture and who provide society with various goods or services, such as craftsmen, traders, officials, or priests. In all complex societies worldwide before the advent of the Industrial Revolution in Europe in the late eighteenth century, around 90% of the population were needed to serve as farmers to raise this surplus, which sustained the remaining 10%. These societies must also have an exchange and distribution system so that the surplus can be distributed among those who are not engaged in agriculture. This system is centered in an urban area or city where the non-farming population resides.

Complex societies are also socially stratified. Society is organized into different classes or strata, the individual members of which possess a similar level of wealth and status. Stratified societies are hierarchical in that these strata are arranged in descending order, like the layers of a birthday cake. The position of each class in this hierarchy depends on the degree of prestige and wealth of its collective membership. Membership in a particular class often imparts certain rights and privileges that are denied to members of a lower class.
Stratified societies also possess an elite or ruling class, which sits atop this hierarchy like the frosting on a cake, and whose members alone provide leadership to their communities based on their abundant resources and high social standing. In stratified societies, the standard of living of those in the upper classes is often much more affluent than that of those in the lower classes.

A complex culture has a formal system of government or state. Decisions regarding the community as a whole and the settlement of internal disputes fall under the jurisdiction of the state and its laws. Participation in the government is often limited to certain classes or to a ruling elite alone, who specialize in performing government functions (i.e. judges, generals, lawmakers, and financial officials).

Primary Source: Alvar Núñez Cabeza de Vaca, "Indians of the Rio Grande"

Alvar Núñez Cabeza de Vaca (1528-1536)

They are so accustomed to running that, without resting or getting tired, they run from morning till night in pursuit of a deer, and kill a great many, because they follow until the game is worn out, sometimes catching it alive. Their huts are of matting placed over four arches. They carry them on their back and move every two or three days in quest of food; they plant nothing that would be of any use. They are very merry people, and even when famished do not cease to dance and celebrate their feasts and ceremonials. Their best times are when "tunas" (prickly pears) are ripe, because then they have plenty to eat and spend the time in dancing and eating day and night. As long as these tunas last they squeeze and open them and set them to dry. When dried they are put in baskets like figs and kept to be eaten on the way. The peelings they grind and pulverize. All over this country there are a great many deer, fowl and other animals which I have before enumerated. Here also they come up with cows; I have seen them thrice and have eaten their meat.
They appear to me of the size of those in Spain. Their horns are small, like those of the Moorish cattle; the hair is very long, like fine wool and like a peajacket; some are brownish and others black, and to my taste they have better and more meat than those from here. Of the small hides the Indians make blankets to cover themselves with, and of the taller ones they make shoes and targets. These cows come from the north, across the country further on, to the coast of Florida, and are found all over the land for over four hundred leagues. On this whole stretch, through the valleys by which they come, people who live there descend to subsist upon their flesh. And a great quantity of hides are met with inland.

We remained with the Avavares Indians for eight months, according to our reckoning of the moons. During that time they came for us from many places and said that verily we were children of the sun. Until then Dorantes and the negro had not made any cures, but we found ourselves so pressed by the Indians coming from all sides, that all of us had to become medicine men. I was the most daring and reckless of all in undertaking cures. We never treated anyone that did not afterwards say he was well, and they had such confidence in our skill as to believe that none of them would die as long as we were among them. . . .

The women brought many mats, with which they built us houses, one for each of us and those attached to him. After this we would order them to boil all the game, and they did it quickly in ovens built by them for the purpose. We partook of everything a little, giving the rest to the principal man among those who had come with us for distribution among all. Every one then came with the share he had received for us to breathe on it and bless it, without which they left it untouched. Often we had with us three to four thousand persons. And it was very tiresome to have to breathe on and make the sign of the cross over every morsel they ate or drank.
For many other things which they wanted to do they would come to ask our permission, so that it is easy to realize how greatly we were bothered. The women brought us tunas, spiders, worms, and whatever else they could find, for they would rather starve than partake of anything that had not first passed through our hands. While traveling with those, we crossed a big river coming from the north and, traversing about thirty leagues of plains, met a number of people that came from afar to meet us on the trail, who treated us like the foregoing ones. Thence on there was a change in the manner of reception, insofar as those who would meet us on the trail with gifts were no longer robbed by the Indians of our company, but after we had entered their homes they tendered us all they possessed, and the dwellings also. We turned over everything to the principals for distribution. Invariably those who had been deprived of their belongings would follow us, in order to repair their losses, so that our retinue became very large. They would tell them to be careful and not conceal anything of what they owned, as it could not be done without our knowledge, and then we would cause their death. So much did they frighten them that on the first few days after joining us they would be trembling all the time, and would not dare to speak or lift their eyes to Heaven. Those guided us for more than fifty leagues through a desert of very rugged mountains, and so arid that there was no game. Consequently we suffered much from lack of food, and finally forded a very big river, with its water reaching to our chest. Thence on many of our people began to show the effects of the hunger and hardships they had undergone in those mountains, which were extremely barren and tiresome to travel. The next morning all those who were strong enough came along, and at the end of three journeys we halted. 
Alonso del Castillo and Estevanico, the negro, left with the women as guides, and the woman who was a captive took them to a river that flows between mountains where there was a village in which her father lived, and these were the first adobes we saw that were like unto real houses. Castillo and Estevanico went to these and, after holding parley with the Indians, at the end of three days Castillo returned to where he had left us, bringing with him five or six of the Indians. He told how he had found permanent houses, inhabited, the people of which ate beans and squashes, and that he had also seen maize. Of all things upon earth that caused us the greatest pleasure, and we gave endless thanks to our Lord for this news. Castillo also said that the negro was coming to meet us on the way, near by, with all the people of the houses. For that reason we started, and after going a league and a half met the negro and the people that came to receive us, who gave us beans and many squashes to eat, gourds to carry water in, robes of cowhide, and other things. As those people and the Indians of our company were enemies, and did not understand each other, we took leave of the latter, leaving them all that had been given to us, while we went on with the former and, six leagues beyond, when night was already approaching, reached their houses, where they received us with great ceremonies. Here we remained one day, and left on the next, taking them with us to other permanent houses, where they subsisted on the same food also, and thence on we found a new custom. . . . Having seen positive traces of Christians and become satisfied they were very near, we gave many thanks to our Lord for redeeming us from our sad and gloomy condition. Anyone can imagine our delight when he reflects how long we had been in that land, and how many dangers and hardships we had suffered. 
That night I entreated one of my companions to go after the Christians, who were moving through the part of the country pacified and quieted by us, and who were three days ahead of where we were. They did not like my suggestion, and excused themselves from going, on the ground of being tired and worn out, although any of them might have done it far better than I, being younger and stronger. Seeing their reluctance, in the morning I took with me the negro and eleven Indians and, following the trail, went in search of the Christians. On that day we made ten leagues, passing three places where they slept. The next morning I came upon four Christians on horseback, who, seeing me in such a strange attire, and in company with Indians, were greatly startled. They stared at me for quite awhile, speechless; so great was their surprise that they could not find words to ask me anything. I spoke first, and told them to lead me to their captain, and we went together to Diego de Alcaraz, their commander. Study Questions: 1. Summarize Cabeza de Vaca’s impression of the people he came upon during his journey. What are his impressions of their habits and customs? What seems to be his attitude toward these people? 2. How were the author and his companions received and treated by the Avavares Indians? 3. Describe the various difficulties faced by Cabeza de Vaca and his companions during their travels. Attributions Title Image https://commons.wikimedia.org/wiki/File:Fort-blount-paleo-indian-tn1.jpg Paleo-Indian point found on the Fox Farm (which contains the Fort Blount-Williamsburg site) in Jackson County, Tennessee, USA.. Collection of Gene Smith, Jackson County, Tenn. Brian Stansberry, CC BY 3.0 <https://creativecommons.org/licenses/by/3.0>, via Wikimedia Commons User:Roblespepe, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons
Maya Overview Maya The Maya were among the first big empires of the ancient Meso-American peoples. They provided the foundation of different indigenous groups of the Meso-American populations. The Aztecs then built on to what the Mayans created. Learning Objectives Evaluate the differences between the Preclassic and Classic periods of the Maya. Evaluate the impact of the environment on the Mayan peoples. Key Terms / Key Concepts stelae: Carved stones depicting rulers, with hieroglyphic texts describing their accomplishments. Copán: An important city that boasted some of the most complex architecture from the Classic period of Maya history Tzolkin: The 260-day ritual calendar used to calculate religious festival days. Mayapan: The cultural capital of the Maya culture during the Postclassic period. It was at its height between 1220 and 1440 CE. Yucatán: A geographic area in the south of modern day Mexico near Belize. The Pre-classic Period of the Maya The Preclassic period is the first of three periods in Mayan history, coming before the Classic and Postclassic periods. It extended from the emergence of the first settlements sometime between 2000 and 1500 BCE until 250 CE. The Preclassic period saw the rise of large-scale ceremonial architecture, writing, cities, and states. Many of the distinctive elements of Mesoamerican civilization can be traced back to this period, including the dominance of corn, the building of pyramids, human sacrifice, jaguar worship, the complex calendar, and many of the gods. Mayan language speakers most likely originated in the Chiapas-Guatemalan Highlands and dispersed from there. By around 2500–2000 BCE researchers can begin to trace the arc of Mayan-language settlements and culture in what is now southeastern Mexico, Guatemala, and Belize. The Central Mexican region was home to many groups of peoples that influenced the Mayan populations. 
Trade goods and artwork demonstrate how many resources flowed throughout the region. The adoption of maize was pivotal: the pre-Mayan population grew significantly throughout this period on the strength of this staple food. Many of the villages were local food producers, and this transition from hunter-gathering was a significant one for the pre-Mayan peoples. The Olmec were the group with the most important influence on the pre-Mayan peoples, particularly in art and religion. By the 1st millennium BCE, pre-Mayan culture had coalesced into the first Mayan city-states, which developed massive governments and numerous monuments. Among the biggest innovations of this first millennium BCE period were the development of a glyph-based writing system and the concept of the number zero, which together made possible strong record-keeping and architectural wonders. The end of the pre-Mayan populations remains mostly a mystery today. The period of 100–250 CE saw a significant climate shift toward a warmer period. This would have impacted the rains and irrigation of the Yucatán Peninsula, directly creating problems for the populations and their agricultural base. The Classic Period of the Maya The Classic period lasted from 250 to 900 CE and was the peak of the Maya civilization. It saw a peak in large-scale construction and urbanism, the recording of monumental inscriptions, and significant intellectual and artistic development, particularly in the southern lowland regions. During this period the Maya population numbered in the millions, with many cities containing 50,000 to 120,000 people. The Maya developed an agriculturally intensive, city-centered civilization consisting of numerous independent city-states of varying power and influence. 
They created a multitude of kingdoms and small empires, built monumental palaces and temples, engaged in highly developed ceremonies, and developed an elaborate hieroglyphic writing system. The Mayan cities of the Classic Maya world system were located in the central lowlands, while the corresponding peripheral Maya units were found along the margins of the southern highland and northern lowland areas. The semi-peripheral units generally took the form of trade and commercial centers. But as in all world systems, the Maya core centers shifted through time, starting out during Preclassic times in the southern highlands, moving to the central lowlands during the Classic period, and finally shifting to the northern peninsula during the Postclassic period. Monuments The most notable monuments are the stepped pyramids the Maya built in their religious centers and the accompanying palaces of their rulers. The palace at Cancuén is the largest in the Maya area, but the site has no pyramids. Copán came to its full power between the 6th and 8th centuries, and included massive temples and carvings that illustrate the full power of its ruling, and often merciless, emperors. Cities in the southeastern region were also cultural and religious centers, and included large temples, ball courts, and even a uniquely vaulted ceiling in the hallway of the Palenque Palace. Other important archaeological remains include the carved stone slabs usually called stelae (the Maya called them tetun, or “tree-stones”), which depict rulers along with hieroglyphic texts describing their genealogy, military victories, and other accomplishments. Trade The political relationship between Classic Maya city-states has been likened to the relationships between city-states in Classical Greece and Renaissance Italy. Some cities were linked to each other by straight limestone causeways, known as sacbeob. Whether the exact function of these roads was commercial, political, or religious has not been determined. 
The Maya civilization participated in long distance trade with many other Mesoamerican cultures, including Teotihuacan, the Zapotec, and other groups in central and gulf-coast Mexico. In addition, they traded with more distant, non-Mesoamerican groups, such as the Taínos of the Caribbean islands. Archeologists have also found gold from Panama in the Sacred Cenote of Chichen Itza. Important trade goods included: - Cacao - Salt - Seashells - Jade - Obsidian Calendars and Religion The Maya utilized complex mathematical and astronomical calculations to build their monuments and conceptualize the cosmography of their religion. Each of the four directions represented specific deities, colors, and elements. The underworld, the cosmos, and the great tree of life at the center of the world all played their part in how buildings were built and when feasts or sacrifices were practiced. Ancestors and deities helped weave the various levels of existence together through ritual, sacrifice, and measured solar years. The Maya developed a mathematical system that is strikingly similar to the Olmec traditions. The Maya also linked this complex system to the deity Itzamna. This deity was believed to have brought much of Maya culture to Earth. A 260-day calendar ( Tzolkin ) was combined with the 365-day solar calendar (Haab’) to create a calendar round. This calendar round would take fifty-two solar years to return to the original first date. The Tzolkin calendar was used to calculate exact religious festival days. It utilized twenty named days that repeated thirteen times in that calendar year. The solar calendar (Haab’) is very similar to the modern solar calendar year that uses Earth’s orbit around the Sun to measure time. The Maya believed there were five chaotic days at the end of the solar year that allowed the portals between worlds to open up, known as Wayeb’. These calendars were recorded utilizing specific symbols for each day in the two central cycles. 
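The fifty-two-year length of the calendar round follows directly from the arithmetic of the two interlocking cycles: the two calendars realign only after a whole number of both 260-day and 365-day cycles has elapsed. The short sketch below (illustrative Python, not part of the original lesson) verifies the figure:

```python
from math import gcd

# The ritual Tzolkin cycle (260 days) and the solar Haab' cycle (365 days)
# return to the same paired date only after their least common multiple.
tzolkin, haab = 260, 365
calendar_round_days = tzolkin * haab // gcd(tzolkin, haab)

print(calendar_round_days)          # 18980 days
print(calendar_round_days // haab)  # 52 solar years
```

Because gcd(260, 365) = 5, the round is 260 × 365 / 5 = 18,980 days, which is exactly fifty-two 365-day solar years, matching the figure given in the text.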
Calendrical stones were employed to carefully follow the movement of the solar and religious years. Although less commonly used, the Maya also employed a long count calendar that calculated dates hundreds of years in the future. They also inscribed a lengthier 819-day calendar on many religious temples throughout the region that most likely coincided with important religious days. Decline The Classic Maya Collapse refers to the decline of the Maya Classic Period and abandonment of the Classic Period Maya cities of the southern Maya lowlands of Mesoamerica between the 8th and 9th centuries. This should not be confused with the collapse of the Preclassic Maya in the 2nd century CE. The Classic Period of Mesoamerican chronology is generally defined as the period from 300 to 900 CE, the last 100 years of which, from 800 to 900 CE, are frequently referred to as the Terminal Classic. It has been hypothesized that the decline of the Maya is related to the collapse of their intricate trade systems, especially those connected to the central Mexican city of Teotihuacán. Before there was a greater knowledge of the chronology of Mesoamerica, Teotihuacan was believed to have fallen during 700–750 CE, forcing the “restructuring of economic relations throughout highland Mesoamerica and the Gulf Coast.” This remaking of relationships between civilizations would have then given the collapse of the Classic Maya a slightly later date. However, it is now believed that the strongest Teotihuacan influence was during the 4th and 5th centuries. In addition, the civilization of Teotihuacan started to lose its power, and maybe even abandoned the city, during 600–650 CE. The Maya civilizations are now thought to have lived on, and also prospered, perhaps for another century after the fall of Teotihuacano influence. The Classic Maya Collapse is one of the biggest mysteries in archaeology. 
The classic Maya urban centers of the southern lowlands went into decline during the 8th and 9th centuries and were abandoned shortly thereafter. Some 88 different theories, or variations of theories, attempting to explain the Classic Maya Collapse have been identified. From climate change, to deforestation, to lack of action by Maya kings, there is no universally accepted collapse theory, although drought is gaining momentum as the leading explanation. The Decline of the Maya The period after the second collapse of the Maya Empire (900 CE–1600 CE) is called the Postclassic period. The center of power shifted from the central lowlands to the northern peninsula as populations most likely searched for reliable water resources, along with greater social stability. The Maya cities of the northern lowlands in Yucatán continued to flourish. A typical Classic Maya polity was a small hierarchical state (called an ajawil, ajawlel, or ajawlil) headed by a hereditary ruler known as an ajaw (later k’uhul ajaw). However, the Postclassic period generally saw the widespread abandonment of once-thriving sites as populations gathered closer to water sources. Warfare most likely caused populations in long-inhabited religious cities, like Kaminaljuyu, to be abandoned in favor of smaller, hilltop settlements that had a better advantage against warring factions. Painted mural at San Bartolo from around 100 BCE: This colorful mural depicts a king practicing bloodletting, probably for an inauguration or other sacrificial purpose. Postclassic Cities Maya cities during this era were dispersed settlements, often centered around the temples or palaces of a ruling dynasty or elite in that particular area. Cities remained the locales of administrative duties and royal religious practices, and the sites where luxury items were created and consumed. 
City centers also provided the sacred space for privileged nobles to approach the holy ruler and the places where aesthetic values of the high culture were formulated and disseminated and where aesthetic items were consumed. These more established cities were the self-proclaimed centers of social, moral, and cosmic order. If a royal court fell out of favor with the people, as in the well-documented cases of Piedras Negras or Copán, this fall from power would cause the inevitable “death” and abandonment of the associated settlement. After the decline of the ruling dynasties of Chichén Itzá, Mayapan became the most important cultural site until about 1450 CE. This city’s name may be the source of the word “Maya,” which had a more geographically restricted meaning in Yucatec and colonial Spanish. The name only grew to its current meaning in the 19th and 20th centuries. The area degenerated into competing city-states until the Spanish arrived in the Yucatán and shifted the power dynamics. Artistry, Architecture, and Religion The Postclassic period is often viewed as a period of cultural decline. However, it was a time of technological advancement in areas of architecture, engineering, and weaponry. Metallurgy came into use for jewelry and the development of some tools utilizing new metal alloys and metalworking techniques that developed within a few centuries. And although some of the classic cities had been abandoned after 900 CE, architecture continued to develop and thrive in newly flourishing city-states, such as Mayapan. Religious and royal architecture retained themes of death, rebirth, natural resources, and the afterlife in their motifs and designs. Ballcourts, walkways, waterways, pyramids, and temples from the Classic period continued to play essential roles in the hierarchical world of Maya city-states. Maya religion continued to be centered around the worship of male ancestors. 
These patrilineal intermediaries could vouch for mortals in the physical world from their position in the afterlife. Archeological evidence shows that deceased relatives were buried under the floor of family homes. Royal dynasties built pyramids in order to bury their ancestors. This patrilineal form of worship was used by some royal dynasties in order to justify their right to rule. The afterlife was complex, and included thirteen levels in heaven and nine levels in the underworld, which had to be navigated by an initiated priesthood, ancestors, and powerful deities. Precise food preparation, offerings, and astronomical predictions were all required for religious practices. Powerful deities that often represented natural elements, such as jaguars, rain, and hummingbirds, needed to be placated with offerings and prayers regularly. Many of the motifs on large pyramids and temples of the royal dynasties reflect the worship of both deities and patrilineal ancestors and provide a window into the daily practices of this culture before the arrival of Spanish forces. Primary Source: The Popul Vuh The Popul Vuh Lewis Spence (July, 1908) PREFACE THE "Popol Vuh" is the New World's richest mythological mine. No translation of it has as yet appeared in English, and no adequate translation in any European language. It has been neglected to a certain extent because of the unthinking strictures passed upon its authenticity. That other manuscripts exist in Guatemala than the one discovered by Ximenes and transcribed by Scherzer and Brasseur de Bourbourg is probable. So thought Brinton, and the present writer shares his belief. And ere it is too late it would be well that these--the only records of the faith of the builders of the mystic ruined and deserted cities of Central America--should be recovered. 
This is not a matter that should be left to the enterprise of individuals, but one which should engage the consideration of interested governments; for what is myth to-day is often history to-morrow. THE POPOL VUH [The numbers in the text refer to notes at the end of the study] THERE is no document of greater importance to the study of the pre-Columbian mythology of America than the "Popol Vuh." It is the chief source of our knowledge of the mythology of the Kiché people of Central America, and it is further of considerable comparative value when studied in conjunction with the mythology of the Nahuatlacâ, or Mexican peoples. This interesting text, the recovery of which forms one of the most romantic episodes in the history of American bibliography, was written by a Christianised native of Guatemala some time in the seventeenth century, and was copied in the Kiché language, in which it was originally written, by a monk of the Order of Predicadores, one Francisco Ximenes, who also added a Spanish translation and scholia. The Abbé Brasseur de Bourbourg, a profound student of American archæology and languages (whose euhemeristic interpretations of the Mexican myths are as worthless as the priceless materials he unearthed are valuable) deplored, in a letter to the Duc de Valmy1, the supposed loss of the "Popol Vuh," which he was aware had been made use of early in the nineteenth century by a certain Don Felix Cabrera. Dr. C. Scherzer, an Austrian scholar, thus made aware of its value, paid a visit to the Republic of Guatemala in 1854 or 1855, and was successful in tracing the missing manuscript in the library of the University of San Carlos in the city of Guatemala. It was afterwards ascertained that its scholiast, Ximenes, had deposited it in the library of his convent at Chichicastenango whence it passed to the San Carlos library in 1830. 
Scherzer at once made a copy of the Spanish translation of the manuscript, which he published at Vienna in 1856 under the title of "Las Historias del origen de los Indios de Guatemala, par el R. P. F. Francisco Ximenes." The Abbé Brasseur also took a copy of the original, which he published at Paris in 1861, with the title "Vuh Popol: Le Livre Sacré de Quichés, et les Mythes de l'Antiquité Américaine." In this work the Kiché original and the Abbé's French translation are set forth side by side. Unfortunately both the Spanish and the French translations leave much to be desired so far as their accuracy is concerned, and they are rendered of little use by reason of the misleading notes which accompany them. The name "Popol Vuh" signifies "Record of the Community," and its literal translation is "Book of the Mat," from the Kiché words "pop" or "popol," a mat or rug of woven rushes or bark on which the entire family sat, and "vuh" or "uuh," paper or book, from "uoch" to write. The "Popol Vuh" is an example of a world-wide genre--a type of annals of which the first portion is pure mythology, which gradually shades off into pure history, evolving from the hero-myths of saga to the recital of the deeds of authentic personages. It may, in fact, be classed with the Heimskringla of Snorre, the Danish History of Saxo-Grammaticus, the Chinese History in the Five Books, the Japanese "Nihongi," and, so far as its fourth book is concerned, it somewhat resembles the Pictish Chronicle. 
The language in which the "Popol Vuh" was written was, as has been said, the Kiché, a dialect of the great Maya-Kiché tongue spoken at the time of the Conquest from the borders of Mexico on the north to those of the present State of Nicaragua on the south; but whereas the Mayan was spoken in Yucatan proper, and the State of Chiapas, the Kiché was the tongue of the peoples of that part of Central America now occupied by the States of Guatemala, Honduras and San Salvador, where it is still used by the natives. It is totally different to the Nahuatl, the language of the peoples of Anahuac or Mexico, both as regards its origin and structure, and its affinities with other American tongues are even less distinct than those between the Slavonic and Teutonic groups. Of this tongue the "Popol Vuh" is practically the only monument; at all events the only work by a native of the district in which it was used. A cognate dialect, the Cakchiquel, produced the "Annals " of that people, otherwise known as "The Book of Chilan Balam," a work purely of genealogical interest, which may be consulted in the admirable translation of the late Daniel G. Brinton. The Kiché people at the time of their discovery, which was immediately subsequent to the fall of Mexico, had in part lost that culture which was characteristic of the Mayan race, the remnants of which have excited universal wonder in the ruins of the vast desert cities of Central America. At a period not far distant from the Conquest the once centralised Government of the Mayan peoples had been broken up into petty States and Confederacies, which in their character recall the city-states of mediæval Italy. In all probability the civilisation possessed by these peoples had been brought them by a race from Mexico called the Toltecs, who taught them the arts of building in stone and writing in hieroglyphics, and who probably influenced their mythology most profoundly. 
The Toltecs were not, however, in any way cognate with the Mayans, and were in all likelihood rapidly absorbed by them. The Mayans were notably an agricultural people, and it is not impossible that in their country the maize-plant was first cultivated with the object of obtaining a regular cereal supply. Such, then, were the people whose mythology produced the body of tradition and mythi-history known as the "Popol Vuh"; and ere we pass to a consideration of their beliefs, their gods, and their religious affinities, it will be well to summarise the three books of it which treat of these things, as fully as space will permit, using for that purpose both the French translation of Brasseur and the Spanish one of Ximenes. THE FIRST BOOK Over a universe wrapped in the gloom of a dense and primeval night passed the god Hurakan, the mighty wind. He called out "earth," and the solid land appeared. The chief gods took counsel; they were Hurakan, Gucumatz, the serpent covered with green feathers, and Xpiyacoc and Xmucane, the mother and father gods. As the result of their deliberations animals were created. But as yet man was not. To supply the deficiency the divine beings resolved to create mannikins carved out of wood. But these soon incurred the displeasure of the gods, who, irritated by their lack of reverence, resolved to destroy them. Then by the will of Hurakan, the Heart of Heaven, the waters were swollen, and a great flood came upon the mannikins of wood. They were drowned and a thick resin fell from heaven. The bird Xecotcovach tore out their eyes; the bird Camulatz cut off their heads; the bird Cotzbalam devoured their flesh; the bird Tecumbalam broke their bones and sinews and ground them into powder. Because they had not thought on Hurakan, therefore the face of the earth grew dark, and a pouring rain commenced, raining by day and by night. Then all sorts of beings, great and small, gathered together to abuse the men to their faces. 
The very household utensils and animals jeered at them, their mill-stones2, their plates, their cups, their dogs, their hens. Said the dogs and hens, "Very badly have you treated us, and you have bitten us. Now we bite you in turn." Said the mill-stones (metates), " Very much were we tormented by you, and daily, daily, night and day, it was squeak, screech, screech,3 for your sake. Now you shall feel our strength, and we will grind your flesh and make meal of your bodies." And the dogs upbraided the mannikins because they had not been fed, and tore the unhappy images with their teeth. And the cups and dishes said, "Pain and misery you gave us, smoking our tops and sides, cooking us over the fire burning and hurting us as if we had no feeling. Now it is your turn, and you shall burn." Then ran the mannikins hither and thither in despair. They climbed to the roofs of the houses, but the houses crumbled under their feet; they tried to mount to the tops of the trees, but the trees hurled them from them; they sought refuge in the caverns, but the caverns closed before them. Thus was accomplished the ruin of this race, destined to be overthrown. And it is said that their posterity are the little monkeys who live in the woods. THE MYTH OF VUKUB-CAKIX After this catastrophe, ere yet the earth was quite recovered from the wrath of the gods, there existed a man "full of pride," whose name was Vukub-Cakix. The name signifies "Seven-times-the-colour-of-fire," or "Very brilliant," and was justified by the fact that its owner's eyes were of silver, his teeth of emerald, and other parts of his anatomy of precious metals. In his own opinion Vukub-Cakix's existence rendered unnecessary that of the sun and the moon, and this egoism so disgusted the gods that they resolved upon his overthrow. His two sons, Zipacna and Cabrakan (earth-heaper4 (?) 
and earthquake), were daily employed, the one in heaping up mountains, and the other in demolishing them, and these also incurred the wrath of the immortals. Shortly after the decision of the deities the twin hero-gods Hun-Ahpu and Xbalanque came to earth with the intention of chastising the arrogance of Vukub-Cakix and his progeny. Now Vukub-Cakix had a great tree of the variety known in Central America as "nanze" or "tapal," bearing a fruit round, yellow, and aromatic, and upon this fruit he depended for his daily sustenance. One day on going to partake of it for his morning meal he mounted to its summit in order to espy the choicest fruits, when to his great indignation he discovered that Hun-Ahpu and Xbalanque had been before him, and had almost denuded the tree of its produce. The hero-gods, who lay concealed within the foliage, now added injury to theft by hurling at Vukub-Cakix a dart from a blow-pipe, which had the effect of precipitating him from the summit of the tree to the earth. He arose in great wrath, bleeding profusely from a severe wound in the jaw. Hun-Ahpu then threw himself upon Vukub-Cakix, who in terrible anger seized the god by the arm and wrenched it from the body. He then proceeded to his dwelling, where he was met and anxiously interrogated by his spouse Chimalmat. Tortured by the pain in his teeth and jaw he, in an access of spite, hung Hun-Ahpu's arm over a blazing fire, and then threw himself down to bemoan his injuries, consoling himself, however, with the idea that he had adequately avenged himself upon the interlopers who had dared to disturb his peace. But Hun-Ahpu and Xbalanque were in no mind that he should escape so easily, and the recovery of Hun-Ahpu's arm must be made at all hazards. With this end in view they consulted two venerable beings in whom we readily recognise the father-mother divinities, Xpiyacoc and Xmucane, disguised for the nonce as sorcerers. 
These personages accompanied Hun-Ahpu and Xbalanque to the abode of Vukub-Cakix, whom they found in a state of intense agony. The ancients persuaded him to be operated upon in order to relieve his sufferings, and for his glittering teeth they substituted grains of maize. Next they removed his eyes of emerald, upon which his death speedily followed, as did that of his wife Chimalmat. Hun-Ahpu's arm was recovered, re-affixed to his shoulder, and all ended satisfactorily for the hero-gods. But their mission was not yet complete. The sons of Vukub-Cakix, Zipacna and Cabrakan, remained to be accounted for. Zipacna consented, at the entreaty of four hundred youths, incited by the hero-gods, to assist them in transporting a huge tree which was destined for the roof-tree of a house they were building. Whilst assisting them he was beguiled by them into entering a great ditch which they had dug for the purpose of destroying him, and when once he descended was overwhelmed by tree-trunks by his treacherous acquaintances, who imagined him to be slain. But he took refuge in a side-tunnel of the excavation, cut off his hair and nails for the ants to carry up to his enemies as a sign of his death, waited until the youths had become intoxicated with pulque because of joy at his supposed demise, and then, emerging from the pit, shook the house that the youths had built over his body about their heads, so that all were destroyed in its ruins. But Hun-Ahpu and Xbalanque were grieved that the four hundred had perished, and laid a more efficacious trap for Zipacna. The mountain-bearer, carrying the mountains by night, sought his sustenance by day by the shore of the river, where he lived upon fish and crabs. The hero-gods constructed an artificial crab which they placed in a cavern at the bottom of a deep ravine. The hungry titan descended to the cave, which he entered on all-fours. But a neighbouring mountain had been undermined by the divine brothers, and its bulk was cast upon him. 
Thus at the foot of Mount Meavan perished the proud "Mountain Maker," whose corpse was turned into stone by the catastrophe. Of the family of boasters only Cabrakan remained. Discovered by the hero-gods at his favourite pastime of overturning the hills, they enticed him in an easterly direction, challenging him to overthrow a particularly high mountain. On the way they shot a bird with their blow-pipes, and poisoned it with earth. This they gave to Cabrakan to eat. After partaking of the poisoned fare his strength deserted him, and failing to move the mountain he was bound and buried by the victorious hero-gods. - Mexico, Oct. 15, 1850. - Large hollowed stones used by the women for bruising maize. - The Kiché words are onomatopoetic--"holi, holi, huqi, huqi." - Zipac signifies "Cockspur," and I take the name to signify also "Thrower-up of earth." The connection is obvious. Attributions Title Image: Fachada de Placeres en el Museo Nacional de Antropología; Carlos yo, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons Images courtesy of Wikimedia Commons Boundless World History https://www.coursehero.com/study-guides/boundless-worldhistory/the-inca/ The Popol Vuh https://www.sacred-texts.com/nam/pvuheng.htm
Aztecs Overview Aztecs The Aztecs were the key group in Central America, establishing trade routes and engaging the region's indigenous populations. They built a city in the middle of a lake, on land where none had originally existed. Theirs was the central empire of the region before the Spanish arrived. Learning Objectives - Compare and contrast the Aztec peoples to the other American populations. - Analyze the development of Tenochtitlan and the impact of the city on the Aztec peoples. Key Terms / Key Concepts altepetl: Small, mostly independent city-states that often paid tribute to the Aztec capital of Tenochtitlan. Nahuatl: The language spoken by the Mexica people who made up the Aztec Triple Alliance, as well as many city-states throughout the region. flower wars: The form of ritual war where warriors from the Triple Alliance fought with enemy Nahua city-states. Mesoamerican ballgame: This ritual practice involved a rubber ball that the players hit with their elbows, knees, and hips, and tried to get through a small hoop in a special court. The Aztec People The Aztecs were a pre-Columbian Mesoamerican people of Central Mexico in the 14th, 15th, and 16th centuries. They called themselves Mexica. The Republic of Mexico and its capital, Mexico City, derive their names from the word "Mexica." The capital of the Aztec empire was Tenochtitlan, built on a raised island in Lake Texcoco. Modern Mexico City is built on the ruins of Tenochtitlan. From the 13th century, the Valley of Mexico was the heart of Aztec civilization; here the capital of the Aztec Triple Alliance, the city of Tenochtitlan, was built upon raised islets in Lake Texcoco. The Triple Alliance comprised Tenochtitlan along with its main allies. They formed a tributary empire expanding its political hegemony far beyond the Valley of Mexico, conquering other city-states throughout Mesoamerica. 
At its pinnacle, Aztec culture had rich and complex mythological and religious traditions, and reached remarkable architectural and artistic accomplishments. In 1521 Hernán Cortés, along with a large number of Nahuatl-speaking indigenous allies, conquered Tenochtitlan and defeated the Aztec Triple Alliance under the leadership of Hueyi Tlatoani Moctezuma II. Subsequently the Spanish founded the new settlement of Mexico City on the site of the ruined Aztec capital, from where they proceeded to colonize Central America. Politics The Aztec empire was an example of an empire that ruled by indirect means. Like most European empires, it was ethnically very diverse, but unlike most European empires, it was more of a system of tribute than a single system of government. Although the form of government is often referred to as an empire, in fact most areas within the empire were organized as city-states, known as "altepetl" in Nahuatl. These were small polities ruled by a king (tlatoani) from a legitimate dynasty. Two of the primary architects of the Aztec empire were the half-brothers Tlacaelel and Moctezuma I, nephews of Itzcoatl. Moctezuma I succeeded as king in 1440. Although he was also offered the opportunity to be tlatoani, Tlacaelel preferred to operate as the power behind the throne. Tlacaelel focused on reforming the Aztec state and religious practices. According to some sources, he ordered the burning of most of the extant Aztec books, claiming that they contained lies. He thereupon rewrote the history of the Aztec people, thus creating a common awareness of history for the Aztecs. This rewriting led directly to the curriculum taught to scholars, and promoted the belief that the Aztecs were always a powerful and mythic nation—forgetting forever a possible true history of modest origins. 
One component of this reform was the institution of ritual war (the flower wars) as a way to have trained warriors, and the necessity of constant sacrifices to keep the Sun moving. Economics The Aztec economy can be divided into a political sector, under the control of nobles and kings, and a commercial sector that operated independently of the political sector. The political sector of the economy centered on the control of land and labor by kings and nobles. Nobles owned all land, and commoners got access to farmland and other fields through a variety of arrangements, from rental through sharecropping to serf-like labor and slavery. These payments from commoners to nobles supported both the lavish lifestyles of the high nobility and the finances of city-states. Many luxury goods were produced for consumption by nobles. The producers of featherwork, sculptures, jewelry, and other luxury items were full-time commoner specialists who worked for noble patrons. Several forms of money were in circulation, most notably the cacao bean. These beans could be used to buy food, staples, and cloth. Around thirty beans would purchase a rabbit, while one father was recorded as selling his daughter for around 200 cacao beans. The Aztec rulers also maintained complex road systems with regular stops to rest and eat every ten miles or so. Couriers walked these roads regularly to ensure they were in good working order and to bring news back to Tenochtitlan. Trade also formed a central part of Aztec life. While local commoners regularly paid tribute to the nobles a few times a year, there was also extensive trade with other regions in Mesoamerica. Archeological evidence shows that jade, obsidian, feathers, and shells reached the capital through established trade routes. Rulers and nobles enjoyed wearing these more exotic goods and having them fashioned into expressive headdresses and jewelry. 
Architecture and Agriculture The capital of Tenochtitlan was divided into four even sections called campans. All of these sections were interlaced with a series of canals that allowed for easy transportation throughout the islets of Lake Texcoco. Commoner housing was usually built of reeds or wood, while noble houses and religious sites were constructed from stone. Agriculture played a large part in the economy and society of the Aztecs. They used dams to implement irrigation techniques in the valleys. They also implemented a raised bed gardening technique by layering mud and plant vegetation in the lake in order to create moist gardens. These raised beds were called chinampas. These extremely fertile beds could yield seven different crops each year. Some of the most essential crops in Aztec agriculture included: - Avocados - Beans - Squash - Sweet Potatoes - Maize - Tomatoes - Amaranth - Chilies - Cotton - Cacao beans Most farming occurred outside of the busy heart of Tenochtitlan. However, each family generally had a garden where they could grow maize, fruits, herbs, and medicinal plants on a smaller scale. Aztec Religion The Aztec religion focused on death, rebirth, and the renewal of the sun. The Aztecs practiced ritual sacrifice, ball games, and bloodletting in order to renew the sun each day. The Aztec religious cosmology included the physical earth plane, where humans lived, the underworld (or land of the dead), and the realm of the sky. Due to the flexible imperial political structure, a large pantheon of gods was incorporated into the larger cultural religious traditions. The Aztecs also worshipped deities that were central to older Mesoamerican cultures, such as the Olmecs. Some of the most central deities that the Aztecs paid homage to included: - Huitzilopochtli – The "left-handed hummingbird" god of war and the sun, and also the founder of Tenochtitlan. 
- Quetzalcoatl – The feathered serpent god that represented the morning star, wind, and life. - Tlaloc – The rain and storm god. - Mixcoatl – The "cloud serpent" god that was incorporated into Aztec belief and represented war. - Xipe Totec – The flayed god that was associated with fertility. This deity was also incorporated from cultures under the Aztec Triple Alliance umbrella. Founding Myth of Tenochtitlan Veneration of Huitzilopochtli, the personification of the sun and of war, was central to the religious, social, and political practices of the Mexica people. Huitzilopochtli attained this central position after the founding of Tenochtitlan and the formation of the Mexica city-state society in the 14th century. According to myth, Huitzilopochtli directed the wanderers to found a city on the site where they would see an eagle devouring a snake perched on a fruit-bearing nopal cactus. (It was said that Huitzilopochtli killed his nephew, Cópil, and threw his heart on the lake. Huitzilopochtli honoured Cópil by causing a cactus to grow over Cópil's heart.) This legendary vision is pictured on the coat of arms of Mexico. Ritual and Sacrifice Like all other Mesoamerican cultures, the Aztecs played a variant of the Mesoamerican ballgame. The game was played with a ball of solid rubber. The players hit the ball with their hips, knees, and elbows, and had to pass the ball through a stone ring to automatically win. The practice of the ballgame carried religious and mythological meanings and also served as sport. Many times players of the game were captured during the famous Aztec flower wars with neighboring rivals. Losers of the game were often ritually sacrificed as an homage to the gods. While human sacrifice was practiced throughout Mesoamerica, the Aztecs, if their own accounts are to be believed, brought this practice to an unprecedented level. 
For example, for the reconsecration of the Great Pyramid of Tenochtitlan in 1487, the Aztecs reported that they sacrificed 80,400 prisoners over the course of four days, reportedly by Ahuitzotl, the Great Speaker himself. This number, however, is not universally accepted. Accounts by the Tlaxcaltecas, the primary enemy of the Aztecs at the time of the Spanish Conquest, show that at least some of them considered it an honor to be sacrificed. In one legend, the warrior Tlahuicole was freed by the Aztecs but eventually returned of his own volition to die in ritual sacrifice. Tlaxcala also practiced the human sacrifice of captured Aztec citizens. Everyone was affected by human sacrifice, and it should be considered in the context of the religious cosmology of the Aztec people. It was considered necessary in order for the world to continue and be reborn each new day. Death and ritual blood sacrifice ensured the sun would rise again and crops would continue to grow. Not only were captives and warriors sacrificed, but nobles would often practice ritual bloodletting during certain sacred days of the year. Every level of Aztec society was affected by the belief in the human responsibility to pay homage to the gods, and anyone could serve as a sacrificial offering. Priests and Religious Architecture A noble priest class played an integral role in the religious worship and sacrifices of Aztec society. They were responsible for collecting tributes and ensuring there were enough goods for sacrificial ceremonies. They also trained young men to impersonate various deities for an entire year before being sacrificed on a specific day. These priests were respected by all of society and were also responsible for practicing ritual bloodletting on themselves at regular intervals. Priests could come from the noble or common classes, but they would receive their training at different schools and perform different functions. 
Priests performed rituals from special temples and religious houses. The temples were generally huge pyramidal structures that were covered over with a new surface every fifty-two years, meaning some pyramids were gigantic in scale. These feats of architectural display were the sites of large sacrificial offerings and festivals, where Spanish reports said blood would run down the steps of the pyramids. The priests often performed smaller daily rituals in small, dark temple houses where incense and images of important gods were displayed. Attributions Images courtesy of Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/e/e0/Codex_Borgia_page_17.jpg Boundless World History https://www.coursehero.com/study-guides/boundless-worldhistory/the-toltecs-and-the-aztecs/
Incas Overview Incas The Inca empire was the central empire in South America, sweeping along the range of the Andes Mountains. Its ability to establish a trade federation was key to building a powerful economic and political bloc in South America. Learning Objectives - Evaluate the differences between the different Andean populations. - Analyze the impact of environment on the different Andean populations. Key Terms / Key Concepts Huaca: A large, pyramid-like structure made of adobe bricks and used as a palace, ritual site, temple, and administrative center. vicuña: A wild South American camelid that lives in the high alpine areas of the Andes. It is a relative of the llama, and is now believed to be the wild ancestor of domesticated alpacas, which are raised for their coats. Moche: A city in modern-day Peru, which is also where the Moche culture was centered. quipus: Brightly colored knotted strings that recorded numerical information, such as taxes, goods, and labor, using the base number of 10 to record data in knots. suyus: Distinct districts of the Inca Empire that all reported back to the capital of Cusco. There were four major districts during the height of the empire. The First Peoples of the Americas The first peoples of the Americas go by many different terms: Native American, Indian, Amerindian, Indigenous. The peoples of the Americas have many different cultures and backgrounds that span the geographic diversity of the Americas. The first peoples live in deserts, islands, woodlands, tropical rainforests, swampy marshes, plains, and even in tundra. In this textbook, the first peoples of the Americas will simply be referred to as the indigenous populations, which means the first people in a region. This is not specific to the Americas, but the term fits the most comfortably with our discussion. 
The diversity of indigenous populations and their ways of life demonstrates how complicated the peoples of the Americas truly are, and that complexity is mirrored in their origin stories. Archeologists believe that the first peoples of the Americas arrived approximately 15,000 to 13,000 years ago. Most of the archeological record points to peoples crossing from Asia, either by land bridge or boat, through the Bering Strait region of Alaska and Asia. Once entering the Western hemisphere, the peoples began their migration southward. Most archeologists agree that the first settlers of the Americas were following food and game that migrated and diversified as they moved further south. The archeological record points to migrants moving from the Alaska region and the northern Canadian Rockies to both south and east regions. The group that moved directly southward would become the group that moved the furthest south in the Americas, to the tip of the Tierra del Fuego in modern Argentina. The peoples of the Americas diversified as they migrated throughout the lands. They adapted to the environmental conditions and found a new way of life as they found their homelands. Archeologists debate the number of indigenous people in the Americas by the 1490s CE. Some estimations are as low as 100 million people, others range as high as 350 million. These estimates are difficult because of the lack of records and archeological findings from the period. Indigenous populations in the New World had a unique relationship with technology and production. In the Americas, because of limited resources and travel, this type of knowledge became very scattered and did not universally spread throughout the Americas. In many cases, indigenous groups would learn a technology or food and share it regionally, but very few of these technologies spread throughout the Americas. For example, corn production was widely known throughout the North American world. 
Yet, potatoes, which were a very important South American crop, did not spread beyond the region. This is due to a lack of trade resources and links north to south. For 13,000 years, as indigenous populations moved throughout what would later become the Americas, different types of societies emerged. There were several different groups that held empires and civilizations in the Americas during the ancient and medieval periods. These groups would have a special impact on the later civilizations that developed in the Late Middle Ages period. There are three key indigenous groups in Latin American history that had the greatest impact on European colonization. The first was the Maya, who lived in Central America and developed a culture and agricultural style that was at the heart of trade in the Mesoamerican world. The second was the Aztecs of what is today Central Mexico. The third is the Inca of South America. These three groups were large, complicated empires during the Late Middle Ages that had unique political and economic pull. Andean Peoples The Andean peoples consist of several groups that originated in the Andean world throughout the course of civilization. These include the Moche, the Nazca, and the Inca, which are now the most famous. To build their civilization, the Inca drew from the cultural and political organizations of the various Andean peoples. The Andean peoples take their name from the mountain chain that was their home. The Andes Mountains run north to south throughout the South American continent, as if they were the spine of the continent. They are the longest continental mountain range in the world. They stretch approximately 4,350 miles and have many high plateaus and tall peaks. The Andes are the home to the mountain that is the farthest from the earth's center and are an average height of 13,123 feet above sea level. 
They are approximately double the height of the Appalachian Mountains in North America and three times the size of the Alps in Europe. The height of the Andes Mountains means that there are many elevation zones near the equator where a variety of plants and animals can live that would not normally survive in such hot regions of the world. Moche The Moche (also known as the Early Chimú or Mochica) lived in what is modern-day Peru. Their civilization lasted from approximately 100 to 800 CE. The Moche shared cultural values and social structures within a distinct geographical region. However, scholars suggest this civilization functioned as individual city-states, sharing similar cultural elite classes, rather than as an empire or a single political system. The Moche cultural sphere centered around several valleys along the north coast of Peru and occupied 250 miles of desert coastline that extended up to 50 miles inland. Moche society was agriculturally based. Because of the arid climate, they made a network of irrigation canals that diverted water into the dry region so that they could grow crops. The Moche are also noted for their expansive ceremonial architecture (Huaca), elaborately painted ceramics, and woven textiles. Moche textiles were mostly created using wool from vicuña and alpacas. Although there are few surviving examples of the original textiles, descendants of the Moche people have strong weaving traditions. There are several theories as to what caused the demise of the Moche. Some scholars have emphasized the role of environmental change. Studies of ice cores drilled from glaciers in the Andes reveal climatic events between 536 and 594 CE, possibly a super El Niño, that resulted in thirty years of intense rain and flooding followed by thirty years of drought, which is thought to be part of the aftermath of the climate changes of 535 – 536. 
These weather events could have disrupted the Moche way of life and shattered their faith in their religion, which had promised stable weather through sacrifices. While there is no evidence of a foreign invasion, as many scholars have suggested in the past, the defensive works suggest social unrest, possibly the result of climate change, as factions fought for control over increasingly scarce resources. The Inca The Inca Empire, or Inka Empire, was the largest empire in pre-Columbian America. The civilization emerged in the 13th century and lasted until it was conquered by the Spanish in 1572 CE. The administrative, political, and military center of the empire was located in Cusco (also spelled Cuzco) in modern-day Peru. From 1438 to 1533 CE, the Incas used a variety of methods, from conquest to peaceful assimilation, to incorporate a large portion of western South America. The Inca expanded their borders to include large parts of modern Ecuador, Peru, western and south-central Bolivia, northwest Argentina, north and north-central Chile, and southern Colombia. This vast territory was known in Quechua (the language of the Inca Empire) as Tawantin Suyu, or the Four Regions, which met in the capital of Cusco. Architecture illustrates the sophistication and technical skill typical of the Inca Empire. The main example of this resilient art form was the capital city of Cusco, which drew together the Four Regions. The Inca built their works without using adhesive to keep the walls together. They were so skillful with stonework that they fitted stones together so precisely that a knife blade could not pass between them. This was a process first used on a large scale by the Pucara peoples to the south in Lake Titicaca (c. 300 BCE – 300 CE). 
The rocks used in construction were sculpted to fit together exactly by repeatedly lowering one rock onto another and carving away any sections on the lower rock where there was compression or the pieces did not fit exactly. The tight fit and the concavity on the lower rocks made them extraordinarily stable. Machu Picchu is a rare example of this architectural building technique and remains in remarkable condition after many centuries. It was built around 1450 CE, at the height of the Inca Empire, dating from the period of the two great Inca emperors. Machu Picchu was probably built as a temple for the emperor. It was abandoned just over 100 years later, in 1572, as a belated result of the Spanish Conquest, possibly related to smallpox. Textiles were one of the most precious commodities of the Inca culture; they denoted a person’s social status and often their profession. The brightly colored patterns on a wool tunic represented various positions and achievements. For example, a black-and-white checkerboard pattern topped with a pink triangle denoted a soldier. Because textiles were so specific to a person’s class and employment, citizens could not change their wardrobe without the express permission of the government. Textiles were also manufactured that could only be used for certain tasks or social arenas. A rougher textile, spun from llama wool and called awaska, was used for everyday household chores. On the other hand, a fine-spun cloth made from vicuña wool could only be used in religious ceremonies. Although textiles were considered the most precious commodity in Inca culture, Incas also considered ceramics and metalwork essential commodities of their economy and class system. Incan pottery was distinctive and normally had a spherical body with a cone-shaped base. The pottery would also include curved handles and often featured animal heads, such as jaguars or birds. These ceramics were painted in bright colors, such as orange, red, black, and yellow. 
The Inca required every province to mine for precious metals like tin, silver, gold, and copper. Fine silver and gold were made into intricate decorative pieces for the emperors and elites based on Chimú metallurgy traditions. The decorative pieces often included animal motifs with butterflies, jaguars, and llamas etched into the metal. Skilled metallurgists also transformed bronze and copper into farming implements, such as blades and axes, or pins for everyday activities. The Inca culture boasted a wide variety of crops, numbering around seventy different strains in total; this makes it one of the most diverse crop cultures in the world. Some of these flavorful vegetables and grains included potatoes, sweet potatoes, maize, chili peppers, cotton, tomatoes, peanuts, oca, quinoa, and amaranth. These crops were grown in the high-altitude Andes by building terraced farms that allowed farmers to utilize the mineral-rich mountain soil. The quick change in altitude on these mountain farms utilized the micro-climates of each terrace to grow a wider range of crops. The Inca also produced bounties in the Amazon rainforest and along the more arid coastline of modern-day Peru. Alongside vegetables, the Inca supplemented their diet with fish, guinea pigs, camelid meat, and wild fowl. And they fermented maize, or corn, to create the alcoholic beverage chicha. Administration of the Inca Empire Society was broken into two distinct parts. One segment was comprised of the common people, including those cultures that had been subsumed by the Inca Empire. The second group was made up of the elite of the empire, including the emperor and the kurakas, along with various other dignitaries and blood relations. The Inca Empire was a hierarchical system with the emperor, or Inca Sapa, ruling over the rest of society. Directly below the emperor, a number of religious officials and magistrates oversaw the administration of the empire. 
Kurakas were magistrates who served as the head of an ayliu—a clan-like family unit based on a common ancestor. These leaders mediated between the spiritual and physical worlds. They also collected taxes, oversaw the day-to-day administration of the empire in their regions, and even chose brides for men in their communities. Some of the privileges kurakas enjoyed included exemption from taxation, the right to ride in a litter, and the freedom to practice polygamy. Education was vocationally based for commoners, while the elite received a formal spiritual education. The Inca Empire utilized a hierarchical rule of law to oversee the administration of its vast population. There was no codified legal system for people who broke with the cultural and social norms. Local inspectors reported back to the capital and the emperor and made immediate decisions regarding punishment in cases where customs were not honored. Many times these local inspectors were blood relatives of the emperor. The Incas created complex road systems. The Inca civilization was able to keep populations in line, collect taxes efficiently, and move goods, messages, and military resources across such a varied landscape because of the complex road system. Measuring about 24,800 miles long, this road system connected the regions of the empire and was the most complex and lengthy road system in South America at the time. Two main routes connected the north and the south of the empire, with many smaller branches extending to outposts in the east and west. The roads varied in width and style because the Inca leaders often utilized roads that already existed to create this powerful network. Common people could not use these official roads unless they were given permission by the government. These roads were used for relaying messages by way of chasqui, or human runners, who could run up to 150 miles a day with messages for officials. 
Llamas and alpacas were also used to distribute goods throughout the empire and ease trade relations. Additionally, the roads had a ritual purpose because they allowed the highest leaders of the Inca Empire to ascend into the Andes to perform religious rituals in sacred spaces, such as Machu Picchu. The Inca utilized a complex recording system to keep track of the administration of the empire. Quipus (also spelled khipus) were colorful bunches of knotted strings that recorded census data, taxes, calendrical information, military organization, and accounting information. These “talking knots” could contain anything from a few threads to around 2,000. They used the base number of 10 to record information in complex variations of knots and spaces. Trade and the movement of goods fed into what is called the vertical archipelago. This system meant that all goods produced within the empire were immediately property of the ruling elites. These elites, such as the emperor and governors, then redistributed resources across the empire as they saw fit. Taxes and goods were collected from four distinct suyus, or districts, and sent directly to the ruling emperor in Cusco. This highly organized system was most likely perfected under the emperor Pachacuti around 1460. This system also required a minimum quota of manual labor from the general population. This form of labor taxation was called mita. The populations of each district were expected to contribute to the wealth of the empire by mining, farming, or doing other manual labor that would benefit the entire empire. Precious metals, textiles, and crops were collected and redistributed using the road system that snaked across the land, from the ocean to the Andes. The Inca religious system utilized oral traditions to pass down the mythology of their Sun god, Inti. This benevolent male deity was often represented as a gold disk with large rays and a human face. 
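The base-10 positional idea behind quipu number records can be sketched as a short example. This is a deliberately simplified illustration (the function name and list representation are my own, and real quipus used distinct knot types—long, single, and figure-eight knots—to mark decimal positions), but it shows how knot counts on a cord combine into a single value:

```python
# Toy model of a quipu cord: a list of knot counts, ordered from the
# highest decimal place (nearest the main cord) down to the lowest.
def quipu_value(knot_counts):
    """Read a cord's knot counts as a base-10 number."""
    value = 0
    for count in knot_counts:
        value = value * 10 + count  # shift left one decimal place, add knots
    return value

# A cord with 3 knots, then 0 knots, then 5 knots encodes 305.
print(quipu_value([3, 0, 5]))  # 305
```

An empty space on the cord (zero knots at a position) acts exactly like the digit zero, which is one reason scholars describe the quipu system as a true positional notation.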
Golden disks were commonly displayed at temples across the Inca Empire and were also associated with the ruling emperor, who was supposed to be a direct descendent of Inti and divinely powerful. Inti was also associated with the growth of crops and material abundance, especially in the high Andes, where the Inca centered their power. Some myths state that this benevolent entity Inti had children with Mama Killa, the Moon goddess. Inti ordered these children, named Manco Cápac and Mama Ocllo, to descend from the sky and onto Earth with a divine golden wedge. This wedge penetrated the earth, and they built the capital of Cusco and civilization on that very spot. Royalty were considered to be direct descendants of Inti and, therefore, able to act as intermediaries between the physical and spiritual realms. The high priest of Inti was called the Willaq Umu. He was often the brother or a direct blood relation of the Sapa Inca, or emperor, and was the second most powerful person in the empire. The royal family oversaw the collection of goods, spiritual festivals, and the worship of Inti. Power consolidated around the cult of the Sun, and scholars suggest that the emperor Pachacuti expanded this Sun cult to garner greater power in the 15th century. Conquered provinces were expected to dedicate a third of their resources, such as herds and crops, directly to the worship of Inti. Each province also had a temple with male and female priests worshipping the Inti cult. Becoming a priest was considered one of the most honorable positions in society. Female priests were called mamakuna, or “the chosen women,” and they wove special cloth and brewed chicha for religious festivals. The main temple in the Inca Empire, called Qurikancha, was built in Cusco. The temple housed the bodies of deceased emperors and also contained a vast array of physical representations of Inti, many of which were removed or destroyed when the Spanish arrived. 
Qurikancha was also the main site of the religious festival Inti Raymi, which means “Sun Festival.” It was considered the most important festival of the year and is still celebrated in Cusco on the winter solstice. It represents the mythical origin of the Inca and the hope for good crops in the coming year as the winter sun returns from darkness. Religious life was centered in the Andes near Cusco, but as the Inca Empire expanded its sphere of influence, the Inca had to incorporate a wide array of religious customs and traditions to avoid outright revolt. Ayllus, or family clans, often worshipped very localized entities and gods. The ruling Inca often incorporated these deities into the Inti cosmos. For example, Pachamama, the Earth goddess, was a long-worshipped deity before the Inca Empire. She was incorporated into Inca culture as a lower divine entity. Similarly, the Chimú along the northern coast of Peru worshipped the Moon, rather than the Sun, probably due to the hot, arid climate and their proximity to the ocean. The Inca also incorporated the Moon into their religious myths and practices in the form of Mama Killa. The Inca believed in reincarnation. Death was a passage to the next world that was full of difficulties. The spirit of the dead, camaquen, would need to follow a long dark road. The trip required the assistance of a black dog that was able to see in the dark. Most Incas imagined the afterworld to be very similar to the Euro-American notion of heaven, with flower-covered fields and snow-capped mountains. It was important to the Inca that a person not die by burning and that the body of the deceased not be incinerated, because of the underlying belief that the vital force would then disappear, threatening the passage to the afterworld. 
Those who obeyed the Inca moral code (do not steal, do not lie, do not be lazy) went to live in the “Sun’s warmth” while others spent their eternal days “in the cold earth.” Human sacrifice has been exaggerated by myth, but it did play a role in Inca religious practices. As many as 4,000 servants, court officials, favorites, and concubines were killed upon the death of the Inca Huayna Capac in 1527, for example. The Incas also performed child sacrifices during or after important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as capacocha. The Inca practiced cranial deformation. They achieved this by wrapping tight cloth straps around the heads of newborns in order to alter the shape of their soft skulls into a more conical form; this cranial deformation distinguished the social classes of the communities, with only the nobility practicing it. Primary Source: Incan Creation Myth Incan Creation Myth (1556) Mythology holds the first truths of a culture and often may be the basis for gender roles in society. Set down in print around 1556, more than twenty years after the Spanish Conquistadors had overthrown the Inca Empire, this story was related by an Incan princess to one of the Spaniards. Thus our imperial city came into existence, and was divided into two halves: Hanan-Cuzco, or Upper-Cuzco, and Hurin-Cuzco, or Lower-Cuzco. Hanan-Cuzco was founded by our king and Hurin-Cuzco by our queen, and that is why the two parts were given these names, without the inhabitants of one possessing any superiority over those of the other, but simply to recall the fact that certain of them had been originally brought together by the king, and certain others by the queen. There existed only one single difference between them, ... that the inhabitants of Upper-Cuzco were to be considered as the elders ... for the reason that those from above had been brought together by the male, and those below by the female element. 
Attributions Images courtesy of Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/c/c7/Peru_-_Viewpoint_over_Machu_Pichu_city.jpg Boundless World History https://www.coursehero.com/study-guides/boundless-worldhistory/the-inca/ Primary Source: Incan Creation Myth https://web.archive.org/web/20000416033032/http://www.humanities.ccny.cuny.edu/history/reader/inca.htm
Mississippian Peoples Overview Mississippian Peoples Complex cultures developed in the Mississippi River basin centuries before European contact. Learning Objectives Analyze the differences between the Toltec, Aztec, Inca, and North American indigenous groups. Key Terms / Key Concepts Maize: a grain, domesticated by indigenous peoples in Mesoamerica in prehistoric times, known in many English-speaking countries as corn Atlatl: a wooden stick with a thong or perpendicularly protruding hook on the rear end that grips a groove or socket on the butt of its accompanying spear three sisters: corn, squash, and beans, which were the three most important crops for Mississippian cultures Mounds: formations made of earth that were used as foundations for Mississippian culture structures Mississippian Peoples Centuries before the arrival of European explorers and colonists, complex cultures arose in several regions of North America: the Mound builder cultures along the Mississippi River basin of the Midwest and Southeast, and the Pueblo cultures in the American Southwest. None of these cultures left behind any written records; consequently, knowledge of them is based on archaeological study. The artifacts that these cultures produced, however, enable us to label them as complex. The various Mound builder cultures flourished at different times and regions in areas watered by the Mississippi and its tributaries in the Eastern Woodlands region. The Adena and Hopewell cultures inhabited the upper and lower Ohio River valley, respectively, for approximately 2000 years between 1000 BCE and 1000 CE. The introduction of Indian corn, or maize, from Mexico made an agricultural surplus possible, thereby enabling the growth of these cultures. Maize is rich in carbohydrates as well as vitamins and minerals; therefore, its inclusion in the diet helped to sustain a healthy and growing population. 
The construction of massive earthworks or mounds in the shape of animals or humans (effigies) by these peoples not only provided archaeologists with a nickname for these peoples but also provides evidence that they possessed a formal government or state, which must have overseen the building of these monumental public works. These mounds were probably ceremonial centers for religious rites, which were conducted by a specialized class of priests. Elaborate burial sites indicate that these societies possessed a ruling elite. The elite was often buried with grave goods, which reflected their high social standing. The artifacts recovered from these burials, such as jewelry and pottery, show a degree of workmanship that would suggest a class of specialized craft workers. Such artifacts also indicate the existence of long-distance trade and, therefore, a specialized class of traders as well. Based on this knowledge, the Adena and Hopewell cultures possessed traits characteristic of a complex culture: government, social stratification, and specialization. Eastern Woodland Culture Eastern Woodland Culture refers to the way of life of indigenous peoples in the eastern part of North America between 1,000 BCE and 1,000 CE. The Eastern Woodland cultural region extended from what is now southeastern Canada, through the eastern United States, down to the Gulf of Mexico. The time in which the peoples of this region flourished is referred to as the Woodland Period. This period is known for its continuous development in stone and bone tools, leather crafting, textile manufacture, cultivation, and shelter construction. Many Woodland hunters used spears and atlatls until the end of the period, when these were replaced by bows and arrows. The Southeastern Woodland hunters also used blowguns. The major technological and cultural advancements during this period included the widespread use of pottery and the increasing sophistication of its forms and decoration. 
The growing use of agriculture and the development of the Eastern Agricultural Complex also meant that the nomadic nature of many of the groups was supplanted by permanently occupied villages. Early Woodland Period (1000 – 1 BCE) The archaeological record suggests that humans in the Eastern Woodlands of North America were collecting plants from the wild by 6,000 BCE and gradually modifying them by selective collection and cultivation. In fact, the eastern United States is one of 10 regions in the world to become an independent center of agricultural origin. Research also indicates that the first appearance of ceramics occurred around 2,500 BCE in parts of Florida and Georgia. What differentiates the Early Woodland period from the earlier periods is the appearance of permanent settlements, elaborate burial practices, intensive collection and horticulture of starchy seed plants, and differentiation in social organization. Most of these were evident in the southeastern United States by 1,000 BCE with the Adena culture, which is the best-known example of an early Woodland culture. The Adena culture was centered around what is present-day Ohio and surrounding states and was most likely a number of related indigenous American societies that shared burial complexes and ceremonial systems. Adena mounds generally ranged in size from 20 to 300 feet in diameter and served as burial structures, ceremonial sites, historical markers, and possibly even gathering places. The mounds provided a fixed geographical reference point for the scattered populations of people dispersed in small settlements of one to two structures. A typical Adena house was built in a circular form, 15 to 45 feet in diameter. Walls were made of paired posts tilted outward that were then joined to other pieces of wood to form a cone-shaped roof. The roof was covered with bark, and the walls were bark and/or wickerwork. 
While the burial mounds created by Woodland culture peoples were beautiful artistic achievements, Adena artists were also prolific in creating smaller, more personal pieces of art using copper and shells. Art motifs that became important to many later indigenous Americans began with the Adena. Examples of these motifs include the weeping eye and the cross and circle design. Many works of art revolved around shamanic practices and the transformation of humans into animals, especially birds, wolves, bears, and deer, indicating a belief that objects depicting certain animals could impart those animals’ qualities to the wearer or holder. Middle Woodland Period (1 – 500 CE) The beginning of this period saw a shift of settlement to the interior. As the Woodland period progressed, local and inter-regional trade of exotic materials greatly increased to the point where a trade network covered most of the eastern United States. Ceramics during this time were thinner, of better quality, and more decorated than in earlier times. This ceramic phase saw a trend towards round-bodied pottery and lines of decoration with cross-etching on the rims. Throughout the Southeast and north of the Ohio River, burial mounds of important people were very elaborate and contained a variety of mortuary gifts, many of which were not local. The most archaeologically certifiable sites of burial during this time were in Illinois and Ohio. These have come to be known as the Hopewell tradition. The Hopewellian peoples had leaders, but they were not powerful rulers who could command armies of soldiers or slaves. It has been posited that these cultures accorded certain families with special privileges and that these societies were marked by the emergence of “big-men,” or leaders, who were able to acquire positions of power through their ability to persuade others to agree with them on matters of trade and religion. 
It is also likely these rulers gained influence through the creation of reciprocal obligations with other important community members. Regardless of their path to power, the emergence of big-men marked another step toward the development of the highly structured and stratified sociopolitical organization called the chiefdom, which would characterize later American Indigenous tribes. Due to the similarity of earthworks and burial goods, researchers assume a common body of religious practice and cultural interaction existed throughout the entire region (referred to as the Hopewellian Interaction Sphere). Such similarities could also be the result of reciprocal trade, obligations, or both between local clans that controlled specific territories. Clan heads were buried along with goods received from their trading partners to symbolize the relationships they had established. Although many of the Middle Woodland cultures are called Hopewellian, and groups shared ceremonial practices, archaeologists have identified the development of distinctly separate cultures during the Middle Woodland period. Examples include the Armstrong culture, Copena culture, Crab Orchard culture, Fourche Maline culture, the Goodall Focus, the Havana Hopewell culture, the Kansas City Hopewell, the Marksville culture, and the Swift Creek culture. Late Woodland Period (500 – 1000 CE) The late Woodland period was a time of apparent population dispersal. In most areas, construction of burial mounds decreased drastically, as did long distance trade in exotic materials. Bow and arrow technology gradually overtook the use of the spear and atlatl, and agricultural production of the “three sisters” (maize, beans, and squash) was introduced. While full scale intensive agriculture did not begin until the following Mississippian period, the beginning of serious cultivation greatly supplemented the gathering of plants. 
Late Woodland settlements became more numerous, but the size of each one was generally smaller than that of their Middle Woodland counterparts. It has been theorized that populations increased so much that trade alone could no longer support the communities and some clans resorted to raiding others for resources. Alternatively, the efficiency of bows and arrows in hunting may have decimated the large game animals, forcing tribes to break apart into smaller clans to better use local resources, thus limiting the trade potential of each group. A third possibility is that a colder climate may have affected food yields, also limiting trade possibilities. Lastly, it may be that agricultural technology became sophisticated enough that crop variation between clans lessened, thereby decreasing the need for trade. In practice, many regions of the Eastern Woodlands adopted the full Mississippian culture much later than 1,000 CE. Some groups in the North and Northeast of the United States, such as the Iroquois, retained a way of life that was technologically identical to the Late Woodland until the arrival of the Europeans. Furthermore, despite the widespread adoption of the bow and arrow, indigenous peoples in areas near the mouth of the Mississippi River, for example, appear to have never made the change. Mississippian Culture Mississippian cultures lived in the Mississippi valley of the modern-day United States during the Mississippian Period, which lasted from approximately 800 to 1540 CE. It’s called “Mississippian” because it began in the middle Mississippi River valley, between St. Louis and Vicksburg. However, there were other Mississippians as the culture spread across the modern-day US. There were large Mississippian centers in Missouri, Ohio, and Oklahoma. A number of cultural traits are recognized as being characteristic of the Mississippians. 
Although not all Mississippian peoples practiced all of the following activities, they were distinct from their ancestors in adoption of some or all of the following traits:
- The construction of large, truncated earthwork pyramid mounds, or platform mounds. Such mounds were usually square, rectangular, or occasionally circular. Structures (domestic houses, temples, burial buildings, or other) were usually constructed atop such mounds.
- A maize-based agriculture. In most places, the development of Mississippian culture coincided with adoption of comparatively large-scale, intensive maize agriculture, which supported larger populations and craft specialization.
- The adoption and use of riverine (or more rarely marine) shells as tempering agents in their shell-tempered pottery.
- Widespread trade networks extending as far west as the Rockies, north to the Great Lakes, south to the Gulf of Mexico, and east to the Atlantic Ocean.
- The development of the chiefdom or complex chiefdom level of social complexity.
- A centralization of control through combined political and religious power in the hands of few or one.
- The beginnings of a settlement hierarchy, in which one major center (with mounds) has clear influence or control over a number of lesser communities, which may or may not possess a smaller number of mounds.
- The adoption of the paraphernalia of the Southeastern Ceremonial Complex (SECC), also called the Southern Cult. This is the belief system of the Mississippians as we know it. SECC items are found in Mississippian-culture sites from Wisconsin to the Gulf Coast, and from Florida to Arkansas and Oklahoma. The SECC was frequently associated with ritual game-playing.
Although hunting and gathering plants for food was still important, the Mississippians were mainly farmers. They grew corn, beans, and squash, called the “three sisters” by historic Southeastern Indians. The “sisters” provided a stable and balanced diet, making a larger population possible. 
Large-scale agricultural production made it possible for thousands of people to live in some larger towns and cities, such as at the site of Cahokia, near the modern city of St. Louis, Missouri. A typical Mississippian town was built near a river or creek. It covered about ten acres of ground, and was surrounded by a palisade—a fence made of wooden poles placed upright in the ground. A typical Mississippian house was rectangular, about 12 feet long and 10 feet wide. The walls of a house were built by placing wooden poles upright in a trench in the ground. The poles were then covered with a woven cane matting. The cane matting was then covered with plaster made from mud. This plastered cane matting is called “wattle and daub.” The roof of the house was made from a steep “A” shaped framework of wooden poles covered with grass woven into a tight thatch. Mississippian cultures, like many before them, built mounds. Though other cultures may have used mounds for different purposes, Mississippian cultures typically built structures on top of them. The type of structures constructed ran the gamut: temples, houses, and burial buildings. The Nashville area in Tennessee was a major population center during this period. There were once many temple and burial mounds in Nashville, especially along the Cumberland River. Thousands of Mississippian-era graves have been found in the city, and thousands more may exist in the surrounding area. Mississippian artists produced unique art works. They engraved shell pendants with animal and human figures, as well as carved ceremonial objects out of flint. They sculpted human figures and other objects in stone. Potters molded their clay into many shapes, sometimes decorating them with painted designs. Decline of the Mississippians Hernando de Soto was a Spanish explorer who, from 1539 – 43, lived with and spoke to many Mississippian cultures. 
After his expedition, these cultures had relatively little direct contact with Europeans, but they were profoundly affected indirectly. Since the natives lacked immunity to new infectious diseases, such as measles and smallpox, epidemics induced by contact with the Europeans caused so many fatalities that they undermined the social order of many chiefdoms. Some groups adopted European horses and changed to nomadism. Political structures collapsed in many places. By the time more historical accounts were being written, the Mississippian way of life had changed irrevocably. Some groups maintained an oral tradition link to their mound-building past, such as the late 19th-century Cherokee. Other Indigenous American groups, having migrated many hundreds of miles and lost their elders to diseases, did not know their ancestors had built the mounds dotting the landscape. This contributed to the myth of the Mound Builders as a people distinct from indigenous Americans. Mississippian peoples were almost certainly ancestral to the majority of the indigenous American nations living in this region in the historic era. The historic and modern-day nations believed to have descended from the overarching Mississippian Culture include: Alabama, Apalachee, Caddo, Cherokee, Chickasaw, Choctaw, Muscogee Creek, Guale, Hitchiti, Houma, Kansa, Missouria, Mobilian, Natchez, Osage, Quapaw, Seminole, Tunica-Biloxi, Yamasee, and Yuchi. Attributions Title Image https://commons.wikimedia.org/wiki/File:Mississippian_Figure_MET_DP261003.jpg Mississippian culture; Male figure; Stone Sculpture. 
Discovered at the Link Farm Site located at the confluence of the Duck and Buffalo Rivers in Humphreys County, Tennessee, as part of a paired male and female set of statues nicknamed "Adam" and "Eve" by the discoverers. Metropolitan Museum of Art, CC0, via Wikimedia Commons Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/native-american-cultures-in-north-america/ https://creativecommons.org/licenses/by-sa/4.0/
Pueblo Peoples Overview Pueblo Peoples In the southwest region of the United States and northern Mexico, remarkable complex cultures arose in an arid, semi-desert region. Learning Objectives Analyze the differences between the Toltec, Aztec, Inca, and North American indigenous groups Key Terms / Key Concepts Animism: the worldview that non-human entities—such as animals, plants, and inanimate objects—possess a spiritual essence Sandstone: a sedimentary rock produced by the consolidation and compaction of sand, cemented with clay Irrigation: the act or process of irrigating, or the state of being irrigated; especially, the operation of causing water to flow over lands for the purpose of nourishing plants Shamanism: a practice that involves a practitioner reaching altered states of consciousness in order to perceive and interact with a spirit world and channel transcendental energies into this world Pueblo Peoples In the American southwest a number of different complex cultures emerged, beginning around 400 BCE, whose inhabitants were later known as the "Anasazi" or "Ancient Ones" to the Navajo—an indigenous American tribe from this region during the historical period. These cultures constructed massive, multi-room mudbrick (adobe) structures known as pueblos and raised maize and other crops in this arid region through the large-scale construction of reservoirs and irrigation works. Such public projects suggest a formal system of government, indicative of a complex culture. Around 1000 CE at Chaco Canyon in the San Juan Basin of northern New Mexico, a series of impressive roads connected walled compounds that consisted of pueblos. In southern Arizona near Phoenix, the Hohokam culture also built impressive pueblos around 1300 CE. These people also constructed ceremonial ball courts similar to those of Mexico and Central America. By the end of the fifteenth century, however, the construction of such large pueblos in this region had ceased. 
Southwestern Culture Environmental changes allowed for many cultural traditions to flourish and develop similar social structures and religious beliefs. The greater Southwest has long been occupied by hunter-gatherers and agricultural settlements. This area, comprising modern-day Colorado, Arizona, New Mexico, Utah, and Nevada—and the states of Sonora and Chihuahua in northern Mexico—has seen successive prehistoric cultural traditions since approximately 12,000 years ago. As various cultures developed over time, many of them shared similarities in family structure and religious beliefs. Southwestern farmers probably began experimenting with agriculture by facilitating the growth of wild grains, such as amaranth and chenopods, and gourds for their edible seeds and shells. The earliest maize known to have been grown in the Southwest was a popcorn varietal measuring one to two inches long. It was not a very productive crop. More productive varieties were developed later by Southwestern farmers or introduced via Mesoamerica, though the drought-resistant tepary bean was native to the region. Cotton has been found at archaeological sites dating to about 1200 BCE in the Tucson basin in Arizona and was most likely cultivated by indigenous peoples in the region. Evidence of tobacco use, and possibly of tobacco cultivation, dates back to approximately the same time period. Agave, especially agave murpheyi, was a major food source of the Hohokam and grown on dry hillsides where other crops would not grow. Early farmers also possibly cultivated cactus fruit, mesquite bean, and species of wild grasses for their edible seeds. Extensive irrigation systems were developed and were among the largest of the ancient world. Elaborate adobe and sandstone buildings were constructed, and highly ornamental and artistic pottery was created. The unusual weather conditions could not continue forever, however, and gave way in time to the more common arid conditions of the area. 
These dry conditions necessitated a more minimal way of life and, eventually, the elaborate accomplishments of these cultures were abandoned. The two major prehistoric archaeological culture areas were in the American Southwest and northern Mexico. These cultures, sometimes referred to as Oasis America, are characterized by dependence on agriculture, formal social stratification, population clusters, and major architecture. One of the major cultures that developed during this time was the Pueblo peoples, formerly referred to as the Anasazi. Their distinctive pottery and dwelling construction styles emerged in the area around 750 CE. Ancestral Pueblo peoples are renowned for the construction of and cultural achievement present at Pueblo Bonito and other sites in Chaco Canyon, as well as Mesa Verde, Aztec Ruins, and Salmon Ruins. Other cultural traditions that developed during this time include the Hohokam and Mogollon traditions. Family and Religion Paleolithic peoples in the Southwest initially structured their families and communities into highly mobile traveling groups of approximately 20 to 50 members, moving place to place when resources were depleted and additional supplies were needed. As cultural traditions began to evolve throughout the Southwest between 7500 BCE to 1550 CE, many cultures developed similar social and religious traditions. For the Pueblos and other Southwest American Indian communities, the transition from a hunting-gathering, nomadic experience to more permanent agricultural settlements meant more firmly established families and communities. Climate change that occurred about 3,500 years ago during the Archaic period, however, changed patterns in water sources, dramatically decreasing the population of indigenous peoples. Many family-based groups took shelter in caves and rock overhangs within canyon walls, many of which faced south to capitalize on warmth from the sun during the winter. 
Occasionally, these peoples lived in small, semi-sedentary hamlets in open areas. Many Southwest tribes during the Post-Archaic period lived in a range of structures that included small family pit houses, larger structures to house clans, grand pueblos, and cliff-sited dwellings for defense. These communities developed complex networks that stretched across the Colorado Plateau, linking hundreds of neighborhoods and population centers. While southwestern tribes developed more permanent family structures and established complex communities, they also developed and shared a similar understanding of the spiritual and natural world. Many of the tribes that made up the Southwest Culture practiced animism and shamanism. Shamanism encompasses the premise that shamans are intermediaries or messengers between the human world and the spirit worlds. At the same time, animism encompasses the beliefs that there is no separation between the spiritual and physical (or material) world. Additionally, animism includes the belief that souls or spirits exist not only in humans but also in some other animals, plants, rocks, and geographic features such as mountains or rivers, or other entities of the natural environment, including thunder, wind, and shadows. Although at present there are a variety of contemporary cultural traditions that exist in the greater Southwest, many of these traditions still incorporate religious aspects that are found in animism and shamanism. Some of these cultures include the Yuman-speaking peoples inhabiting the Colorado River valley, the uplands, and Baja California; the O’odham peoples of southern Arizona and northern Sonora; and the Pueblo peoples of Arizona and New Mexico. 
Attributions Title Image https://commons.wikimedia.org/wiki/File:Chaco_Canyon-Chetro_Ketl-14-Kivas-1982-gje.jpg Ruins at Chaco Canyon - Gerd Eichmann, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/native-american-cultures-in-north-america/ https://creativecommons.org/licenses/by-sa/4.0/
The Historical Background of the Reformation Overview The Historical Background of the Reformation Europe by 1500 was ripe for sweeping religious reforms at the close of the Medieval period and the dawn of the modern era. Learning Objectives - Identify the primary factors from the late medieval period that led to the Reformation. Key Terms / Key Concepts Black Plague: an epidemic disease carried by fleas that killed millions across Europe beginning in the 14th century Little Ice Age: a period of cold and long winters that impacted the world from the mid-13th century through the mid-15th century Humanists: Renaissance artists and writers who collected and modeled their works on the artwork and writings of ancient Greek and Roman artists and writers Europe in 1500: General Overview The Reformation developed as a religious movement in a period of intense social anxiety in Europe due to rapid social and economic change. In this time of stress and worry, people often turned to their faith for comfort and support. One cause of anxiety was the fear of a horrific death from disease. The Black Plague or Bubonic Plague first struck Europe in 1348, but this disease continued to strike down the population periodically in the centuries that followed. Infestations of fleas on rats and livestock carried this disease from Central Asia and across Europe. By 1500 and over the next century, another source of anxiety was economic and social insecurity. Beginning around 1450, Europe began experiencing a warming climate, which lasted until the end of the 16th century. This period of rising temperatures brought temporary relief to Europeans, who had been living in a Little Ice Age prior to 1450. The rising temperatures of the period 1450 to 1600 resulted in longer growing seasons for crops such as wheat, and in larger harvests, which in turn resulted in population growth. 
People with access to a plentiful supply of food were well nourished and less likely to die from hunger or epidemic diseases, which are more likely to affect people who are malnourished. As the population grew, demand for goods also expanded with the corresponding growth of markets. The introduction of vast amounts of precious metals (silver and gold) into Europe in the later 16th century, after the Spanish conquest of the Aztec and Inca Empires in the Western Hemisphere, accelerated the growth of the market economy. People with coins of silver and gold in hand were ready and eager to purchase goods on the market. This economic expansion allowed inventive entrepreneurs the opportunity to amass large fortunes. The acquisition of such fortunes, however, stirred up social unrest and economic fears when those possessing this newfound wealth were non-noble commoners. Less successful commoners and the elite nobility viewed these entrepreneurs as greedy, pretentious men who had forgotten their proper place in the hierarchical social order, which traditionally drew a sharp distinction between the aristocratic landowning elite and the mass of common people. Society did not expect commoners to serve in positions of leadership or to engage in elite activities such as wearing fine clothes, riding in a horse-drawn carriage, or entertaining guests with a feast of meat and wine. These were positions and activities reserved for the elite nobility. Non-nobles with money who engaged in such activities were a dire threat to the social order. It is perhaps not a coincidence that the leading Protestant reformers, Martin Luther, Ulrich Zwingli, and John Calvin, were commoners. Luther's father was a miner, Zwingli's was a peasant, and Calvin's was an account clerk. The parents of all these reformers sacrificed and saved to send their children to a university, where they could receive an education and afterwards raise their social status through service in the church or government. 
In contrast, one of the leading Catholic reformers, Ignatius Loyola, the founder of the Jesuit Order, was a Spanish nobleman who was a soldier and knight prior to entering the ministry. The Roman Catholic Church in 1500 As people across Europe turned to their faith in this period of anxiety, they were often disappointed with the leadership of the Roman Catholic Church. During the preceding Medieval period and especially after 1000 CE, the Catholic Church in Western and Central Europe had developed into a centralized, hierarchical organization under the leadership of the Popes based in Rome. The Pope was Christ's representative or vicar on earth, and the Pope's power in the church was unchecked. According to the doctrines of the church, the Pope also possessed authority over all the Christian rulers of Christendom, the body of all Christians. In the 15th century, the Pope was not merely a spiritual leader, but he also ruled directly over a large section of Central Italy known as the Papal States. These Popes, therefore, often obtained their position because of their administrative and political skills, and not necessarily due to their spiritual gifts and piety. The Popes of this period were often very worldly men and far from model Christians. For example, Rodrigo Borgia or Pope Alexander VI (r. 1492 – 1503) was infamous for his elaborate, wild parties and for his beautiful mistress. He also used his power as Pope to advance the interests of his family through his efforts to carve out a principality for his illegitimate son, the ruthless and violent Cesare Borgia. The conduct and worldly reputation of such Popes shocked and disgusted Christians across Europe, who wanted to reform the church and end its corruption. The Italian Renaissance The Italian Renaissance provided reform-minded Christians with the tools to demand church reform. 
The Renaissance was a "rebirth" of the art and literature of Classical Greece and Rome, which arose originally in the Italian city-states of North Italy in the 14th and 15th centuries. Renaissance artists and writers modeled their works on the artwork and writings of ancient Greek and Roman artists and writers. Renaissance Humanists collected ancient works of art and manuscripts as sources of inspiration. Around 1350, the Italian poet Petrarch first began to search out and assemble ancient texts and became the "Father of the Humanists." The conquest of the Byzantine Empire in 1453 by the Ottoman Turks resulted in the migration of Byzantine scholars to Italy, who brought ancient Greek manuscripts with them. At nearly the same time, around 1455, the first major printed book, the Gutenberg Bible, was published in Germany. The invention of the printing press allowed for the mass publication and circulation of the ancient works that Humanists had collected. Humanists not only collected ancient manuscripts; they also maintained that the study of ancient Greek and Roman mathematics, philosophy, rhetoric, and poetry would promote human excellence and virtue. The advancement of literacy and education that resulted from the Renaissance provided Europeans who wanted to reform the church with the intellectual skills to question the doctrines and practices of the Catholic Church. Moreover, due to the work of the Humanists, the works of early Christian writers who wrote in ancient Latin and Greek were now in circulation. As more educated Christians read the works of these early Christian writers, such as Saint Augustine (c. 400 CE), they contrasted the corrupt church of their own day with the ancient Christian Church, which provided a model for church reform. 
The Reformation and Society The historical study of the Reformation often focuses on the leading thinkers of this era and the ideas that they espoused, but in this period millions of Europeans, both men and women, either enthusiastically embraced the church reforms advocated by Protestants or passionately defended the traditions of the Roman Catholic Church. The spread of reform would not have been possible without the work of many women, who have remained largely anonymous. For example, Katharina von Bora was a former nun who married the German Protestant reformer Martin Luther. She operated a farm and a brewery to support her husband's work as a teacher and author, while also running a hospital. On the other hand, Teresa of Avila was a Spanish nun and noblewoman of the 16th century who inspired Roman Catholics with her writings on prayer and mystical faith during the Catholic Reformation. The ideas of Protestant reformers and loyalty to the Roman Catholic Church also appealed to large segments of the population across different parts of Europe who were experiencing economic hardship. For example, Martin Luther's stress on the equality of all Christians before God caused peasant farmers across Germany to wonder why their aristocratic landlords controlled their local churches and imposed heavy rents and fees on them. In 1524, peasants across Germany revolted against their landlords in the Peasants' Rebellion. By 1525, the rebellion was over after aristocratic armies massacred over 30,000 men, women, and children. Luther condemned the rebels; equality among Christians, according to Luther, was a spiritual state only and impossible in a sinful, material world. John of Leiden in the Netherlands was a tailor who became an Anabaptist travelling preacher. In 1532, he began preaching in the German city of Munster. He convinced the city's poor residents to expel the Roman Catholic Bishop from the city. 
John became the new leader of the city and demanded that the city residents all share their wealth with one another equally. He also rejected the idea of traditional marriage and insisted that all residents were married to one another in common. Eventually John declared that he himself was Jesus Christ. He had a special gold crown made for himself, and he demanded the people worship him as God. In 1534, the exiled Bishop raised an army and besieged Munster, overthrowing John's regime in 1535. The appeal of Roman Catholicism and the Protestant churches varied from region to region, often depending on the culture of a region. In the Netherlands, for example, the Dutch-speaking, rural areas embraced the Protestant faith, whereas the more city-based Flemish- and Walloon-speaking areas remained faithful to the Roman Catholic Church. Northern Germany with its large trading cities converted to the Protestant faith, whereas rural, Southern Germany remained Roman Catholic. Paris, the royal French capital, was a Roman Catholic stronghold, while large areas in the south of France converted to Protestantism. In Poland, the German-speaking residents of the cities were Protestants, but the ethnic Poles in the countryside were Roman Catholic. While the British Isles became mostly a Protestant region, pockets remained loyal to the Roman Catholic Church: northern England, the Scottish Highlands, and most of Ireland. Attributions Title Image "Dance of Death" Schedel, Hartmann, 1493 - https://creativecommons.org/licenses/by/4.0
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87863/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87864/overview
Witch Hunts Overview The 1500-1600s saw a surge in witch hunts across Western Europe. Reasons for the upswing vary, but most scholars point to the clash of cultures between the Catholic Church, which dominated daily life in Western Europe, and the rebirth of intellectualism and individualism of the Renaissance. Learning Objectives - Analyze why witch-hunts began in Europe in the 16th century. - Evaluate the impact of witch hunts on European society in the 16th century. - Evaluate the similarities and differences between witch trials in Western Europe and Russia. Key Terms / Key Concepts Johannes Kepler: German mathematician, astronomer, astrologer, and key figure in the 17th-century scientific revolution Malleus Maleficarum: 1487 book written by Jakob Sprenger and Heinrich Kramer about how to identify and try witches Witch Hunts in Context Examine the image above. The burning question should be: what drove this society to commit these acts against individuals in the 16th through 18th centuries? It would have been very difficult to watch a person slowly burn alive while a crowd looked on and, in some cases, cheered. The victim was a member of the community, someone who had a standing and was known by people. This scene would have been very difficult both to watch and to participate in. So the question is, what drove people in the 16th to 18th centuries to burn members of their communities at the stake? Individual cases might vary, but there was a deeper social and political change happening during this time period. In modern usage, "witch hunt" has often become shorthand for a society that is suspicious of itself and willing to hurt or endanger the members of its community. The 16th century offers one of the clearest examples of divisions within a society that was willing to hurt its own members, but it is not the only example. 
In the 20th century, deep political divisions in the United States produced McCarthyism, which has often been cited as a witch hunt as well. Other sociologists and historians reference the early 21st century's fear of Islam in the United States as another case of witch hunts. These cases of witch hunts and trials demonstrate that there is a deep fear of something within a society, and that people are attempting to make sense of their world while feeling extreme anxiety and fear of an imagined other. The attempt to make sense of a changing world is important, because oftentimes individuals are trying to relieve their own pressure, but because of the complications of their situation, the only solution they arrive at is to hurt an innocent victim in the process. This is very important to note about the 16th century, because while witchcraft might have been in use, there was a deep-seated fear and anxiety of a changing world. It is important to put together the fundamental changes that the 16th century brought forward and how they would have impacted the European mindset and society. Take, for example, the rise of Protestantism. Previously in European history, there were clear rules about why the king of a country was the king: simply because God told the Pope that an individual should be king. This was straightforward and easy to understand. Yet Protestantism changed this model, with Luther asking the questions: why is the Pope in charge, and should there even be a Pope? That questioning had a secular consequence. Now individuals had a very difficult question to ask: why was their king the king? If you were a Protestant, would you follow a Catholic king? Why? If you were a Catholic, would you follow a Protestant governor? This type of questioning meant that individuals had a very deep problem trusting their neighbors or their communities. Before, most communities of Europe had been unified under the banner of Catholicism. 
With the introduction of Protestantism, how can you trust your neighbor if they are of a different religion than you? The wars and distrust between Catholics and Protestants would eventually come to a boil in the Thirty Years' War and other religious conflicts of the 17th century. It is in this context that the idea of witchcraft must be placed. How can you trust your neighbor if they are of a different religion? How can you explain the world if something goes wrong inside your community? On a local level, this came into sharper focus for the individual. If you were a farmer and your crops failed while your neighbor's crops did not, why? Of course there are complex scientific reasons for this, but people of the 16th century lived before the scientific revolution and did not have the ideas to support such explanations. Individuals would look at one another and understand that there must be a reason, but without a scientific underpinning to explain it, there was instead distrust and negativity toward their neighbors. As with every community, there were complex social engagements of reputation and standing inside the community. These would also have played a part in why individuals made charges against one another. Individuals seeking opportunities would accuse others whose lands would then be up for sale at very cheap prices. The witch trials of the 15th to 18th centuries cast a long legacy that should be examined and mined, not as a moment in time with no relationship to society, but as a window into what happens when a society is at a fracturing point. These moments have happened in the past; they demonstrate key moments of social breakdown and potentially offer the ability to avoid such breakdowns in the future. 
The witch trials in the early modern period were a series of witch hunts between the 15th and 18th centuries, when, across early modern Europe and to some extent in the European colonies in North America, there was widespread hysteria that malevolent Satanic witches were operating as an organized threat to Christendom. Those accused of witchcraft were portrayed as worshippers of the Devil, who engaged in sorcery at meetings known as Witches' Sabbaths. Many people were subsequently accused of being witches and were put on trial for the crime, with varying punishments being applicable in different regions and at different times. In early modern European tradition, witches were stereotypically, though not exclusively, women. European pagan belief in witchcraft was associated with the goddess Diana and dismissed as "diabolical fantasies" by medieval Christian authors. Background to the Witch Trials During the medieval period, there was widespread belief in magic across Christian Europe. The medieval Roman Catholic Church, which then dominated a large swath of Western Europe, divided magic into two forms—natural magic, which was acceptable because it was viewed as merely taking note of the powers in nature that were created by God, and demonic magic, which was frowned upon and associated with demonology. It was also during the medieval period that the concept of Satan, the Biblical Devil, began to develop into a more threatening form. Around the year 1000, when there were increasing fears in Christendom that the end of the world would soon come, the idea of the Devil had become prominent. In the 14th and 15th centuries, the concept of the witch in Christendom underwent a relatively radical change. No longer were witches viewed as sorcerers who had been deceived by the Devil into practicing magic that went against the powers of God. 
Instead they became all-out malevolent Devil-worshippers, who had made pacts with him in which they had to renounce Christianity and devote themselves to Satanism. As a part of this, it was believed that they gained new, supernatural powers that enabled them to work magic, which they would use against Christians. Why? Why did Western Europe suddenly experience a radical shift in how it discussed and evaluated witches? Why, in short, did they occupy so much attention in Western Europe during the fifteenth and sixteenth centuries? Historians frequently cite a major turning point in Western Europe: the arrival and spread of the European Renaissance. While the Renaissance did not outright denounce or diminish the teachings of the Catholic Church, it did promote questioning, new lines of thinking, and a break with roughly one thousand years of feudalism--a social structure in which the power and influence of the church over most of Western Europe was certain. By the 1500s, the Renaissance had spread so far that the Catholic Church feared its grip on Western Europe was sliding. Thus, church authorities supported, if not outright initiated, public hunts for witches and sorcerers--people who were often social outcasts, misfits, irreligious, or pagan. This quest to identify and remove witches received a public boost in support in the late 1400s when German authors Jakob Sprenger and Heinrich Kramer released their book, Malleus Maleficarum (The Hammer of the Witches). The book openly stated three cardinal ideas: (1) witches exist; (2) they can be identified by certain behaviors; and (3) even a common man can identify them. It became enormously popular overnight. Part of the popularity arose from its association with the Catholic Church itself. Sprenger and Kramer were both sanctioned inquisitors of the Catholic Church--men who were "experts" in identifying and trying witches. Proof of this status was in the front of their book, where a papal bull, signed by Pope Innocent VIII, appeared. 
Both authors used the papal bull as evidence of the Pope's support of their book, probably inaccurately. Nevertheless, the book helped set in motion a horrible witch-hunt among friends and neighbors. While the witch trials only really began in the 15th century, with the start of the early modern period, many of their causes had been developing during the previous centuries, with the persecution of heresy by the medieval Inquisition during the late 12th and the 13th centuries, and during the late medieval period, during which the idea of witchcraft or sorcery gradually changed and adapted. An important turning point was the Black Death of 1348–1350, which killed a large percentage of the European population, and which many Christians believed had been caused by evil forces. Beginnings of the Witch Trials While the idea of witchcraft began to mingle with the persecution of heretics even in the 14th century, the beginning of the witch hunts as a phenomenon in its own right became apparent during the first half of the 15th century in southeastern France and western Switzerland, in communities of the Western Alps. While early trials still fall within the late medieval period, the peak of the witch hunt was during the period of the European wars of religion, between about 1580 and 1630. Over the entire duration of the phenomenon of some three centuries, an estimated total of 40,000 to 100,000 people were executed. The Trials of 1580 – 1630 The height of the European witch trials was between 1560 and 1630, with the large hunts first beginning in 1609. The Witch Trials of Trier in Germany were perhaps the biggest witch trial in European history. The persecutions started in the diocese of Trier in 1581 and reached the city itself in 1587, where they were to lead to the deaths of about 368 people, making this perhaps the biggest mass execution in Europe during peacetime. In Denmark, the burning of witches increased following the reformation of 1536. 
Christian IV of Denmark, in particular, encouraged this practice, and hundreds of people were convicted of witchcraft and burned. In England, the Witchcraft Act of 1542 regulated the penalties for witchcraft. In Scotland, over seventy people were accused of witchcraft on account of bad weather when James VI of Scotland, who shared the Danish king's interest in witch trials, sailed to Denmark in 1590 to meet his betrothed, Anne of Denmark. The sentence for an individual found guilty of witchcraft or sorcery during this time, and in previous centuries, typically included either burning at the stake or being tested with the "ordeal of cold water." Accused persons who drowned were considered innocent, and ecclesiastical authorities would proclaim them "brought back," but those who floated were considered guilty of practicing witchcraft, and burned at the stake or executed in an unholy fashion. Decline of the Trials While the witch trials had begun to fade out across much of Europe by the mid-17th century, they continued to a greater extent on the fringes of Europe and in the American colonies. Clergy and intellectuals began to speak out against the trials from the late 16th century onward. In 1615, only the weight of his prestige allowed Johannes Kepler to keep his mother from being burned as a witch. The 1692 Salem witch trials were a brief outburst of witch hysteria in the New World at a time when the practice was already waning in Europe. Witch Trials and Women An estimated 75% to 85% of those accused in the early modern witch trials were women, and there is certainly evidence of misogyny on the part of those persecuting witches, evident from quotes such as "[It is] not unreasonable that this scum of humanity, [witches], should be drawn chiefly from the feminine sex" (Nicholas Rémy, c. 1595) or "The Devil uses them so, because he knows that women love carnal pleasures, and he means to bind them to his allegiance by such agreeable provocations."
In early modern Europe, it was widely believed that women were less intelligent than men and more susceptible to sin. Nevertheless, it has been argued that the supposedly misogynistic agenda of works on witchcraft has been greatly exaggerated, based on the selective repetition of a few relevant passages of the Malleus Maleficarum. Many modern scholars argue that the witch hunts cannot be explained simplistically as an expression of male misogyny, as indeed women were frequently accused by other women, to the point that witch hunts, at least at the local level of villages, have been described as having been driven primarily by "women's quarrels." Barstow (1994) claimed that a combination of factors, including the greater value placed on men as workers in the increasingly wage-oriented economy, and a greater fear of women as inherently evil, loaded the scales against women, even when the charges against them were identical to those against men. Thurston (2001) saw this as a part of the general misogyny of the late medieval and early modern periods, which had increased during what he described as "the persecuting culture" from what it had been in the early medieval period. Gunnar Heinsohn and Otto Steiger in a 1982 publication speculated that witch hunts targeted women skilled in midwifery specifically in an attempt to extinguish knowledge about birth control and "repopulate Europe" after the population catastrophe of the Black Death. Witch Trials in Russia Witch trials in Russia took a very different course from those held in Western Europe. The Catholic Church in Western Europe promoted the idea that witches and sorcerers had made a direct pact with the Devil and encouraged the peasantry to be on their guard and on the watch for witches and sorcerers, who, in the eyes of the church, should be eliminated. 
In contrast, the Russian Orthodox Church held no such belief and made no encouragement to the peasantry to spy on and accuse their neighbors. Instead, the church believed sorcery and witchcraft to be a form of paganism. Paganism, which promoted the belief in many gods, stood in direct opposition to Christianity. To ensure that Orthodoxy was a core feature of a unified Russia, the Russian Orthodox Church asked Tsar Ivan IV to outlaw witchcraft in all forms in the mid-1500s. Under Tsar Ivan IV, witchcraft and sorcery were outlawed. However, the tsar stopped short of instituting a death penalty for witchcraft. He instead asserted that individuals accused of witchcraft be tried before a secular court—a surprising move for Ivan IV, who is better remembered by his nickname, "Ivan the Terrible." A far more surprising turn of events occurred in Russia under the "gentle" Romanov tsar, Alexei Mikhailovich, in the mid-1600s. Under social and religious pressure, the tsar instituted a death penalty for anyone found guilty of practicing sorcery. From the mid-1600s to the turn of the eighteenth century, Russia tried roughly one hundred people for witchcraft. Again, events in Russia stand in stark, surprising contrast to those in Western Europe. Most of the accused in Russia were men, perhaps because of the status of men in Russian society and their association with healing. Of the accused, only a handful were found guilty and executed. Although part of the Russian record of the witch trials of this era is incomplete due to a fire, historians speculate that fewer than twenty people were executed during the Russian witch trials—a far different story than what occurred in Western Europe, where witch hangings and burnings were celebrated as public spectacle. By the eighteenth century, the Enlightenment had reached Russia and witch trials disappeared entirely. 
Attributions Images courtesy of Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/c/cb/Agnes_Sampson_and_witches_with_devil.jpg Text modified from: https://www.coursehero.com/study-guides/boundless-worldhistory/protestantism/
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87864/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87865/overview
The Protestant Reformation Overview Causes of the Reformation The causes of the Reformation were rooted in historical developments in the Roman Catholic Church in the Late Middle Ages. Learning Objectives - Identify the primary factors from the late medieval period that led to the Reformation. Key Terms / Key Concepts Conciliar movement: a reform movement in the 14th-, 15th-, and 16th-century Catholic Church, emerging in response to the Western Schism between rival popes in Rome and Avignon, that held that supreme authority in the church resided with an Ecumenical council, apart from, or even against, the pope the Western Schism: a split within the Catholic Church from 1378 to 1418, when several men simultaneously claimed to be the true pope doctrine: list of beliefs and teachings by the church Indulgences: in Roman Catholic theology, a remission of the punishment that would otherwise be inflicted for a previously forgiven sin as a natural consequence of having sinned, which is granted for specific good works and prayers in proportion to the devotion with which those good works are performed or those prayers are recited Purgatory: according to Roman Catholic doctrine, this was a place of suffering where the souls of the Christian dead went to be purified and cleansed from sin before entering Heaven (The pope had the authority to grant Christians indulgences to be released from Purgatory.) Monastery: a place where a community of monks lived and worked (Monks (men) and Nuns (women) dedicated their lives to celibacy, poverty, and Christian living.) Avignon Papacy: a period from 1309 – 1376 when the Popes resided in Avignon in southern France instead of Rome (The Avignon Popes had a reputation for greed and corruption.) 
Scholasticism: a medieval system of philosophy that maintained that the doctrines of the ancient Greek philosopher Aristotle could be harmonized with the doctrines of the Roman Catholic Church Augustinian Theology: the teaching associated with the works of the Christian theologian Augustine (c. 400 CE) Discontent with the Roman Catholic Church The Protestant Reformation, often referred to simply as the Reformation, was a schism from the Roman Catholic Church initiated by Martin Luther and continued by other early Protestant reformers in Europe in the 16th century. Although there had been significant earlier attempts to reform the Roman Catholic Church before Luther—such as those of Jan Hus, Geert Groote, Thomas à Kempis, Peter Waldo, and John Wycliffe—Martin Luther is widely acknowledged to have started the Reformation with his 1517 work The Ninety-Five Theses. Luther began by criticizing the selling of indulgences, insisting that the Roman Catholic doctrine regarding purgatory and indulgences had no foundation in the gospel. The Protestant position would come to incorporate doctrinal changes, such as sola scriptura (by the scripture alone) and sola fide (by faith alone). The core motivation behind these changes was theological, though many other factors played a part: the rise of nationalism, the Western Schism that eroded faith in the papacy, the perceived corruption of the Roman Curia (the Council of Cardinals, who elected the Pope), the impact of humanism, and the new learning of the Renaissance that questioned much traditional thought. Roots of Unrest Following the breakdown of monasteries and scholasticism in late medieval Europe—accentuated by the Avignon Papacy, the Papal Schism, and the failure of the Conciliar movement—the 16th century saw a great cultural debate about religious reforms and later fundamental religious values. 
These issues initiated wars between princes, uprisings among peasants, and widespread concern over corruption in the Church, which sparked many reform movements within the church. These reformist movements occurred in conjunction with economic, political, and demographic forces that contributed to a growing disaffection with the wealth and power of the elite clergy, resulting in a population that was more critical of the financial and moral corruption of the Roman church at the time of the Renaissance. The major individualistic reform movements that revolted against medieval scholasticism, and the institutions that underpinned it, were humanism and devotionalism. In Germany, “the modern way,” or devotionalism, caught on in the universities and required a redefinition of God—who was no longer a rational governing principle but an arbitrary, unknowable will that could not be limited. God was now a ruler, and religion was more fervent and emotional. Thus, the ensuing revival of Augustinian theology—stating that man cannot be saved by his own efforts but only by the grace of God—would erode the legitimacy of the rigid institutions of the church meant to provide a channel for man to do good works and get into heaven. Humanism, however, was more of an educational reform movement with origins in the Renaissance’s revival of classical learning and thought. As a revolt against Aristotelian logic, it placed great emphasis on reforming individuals through eloquence as opposed to reason. The European Renaissance laid the foundation for the Northern humanists in its reinforcement of the traditional use of Latin as the great unifying language of European culture. The breakdown of the philosophical foundations of scholasticism was a threat to an institutional church supposedly serving as an intermediary between man and God. 
New thinking favored the notion that no religious doctrine can be supported by philosophical arguments, eroding the old alliance between reason and faith laid out in the medieval period by Thomas Aquinas, the leading scholastic philosopher of the 13th century. Additionally, the great rise of the burghers (merchant class) and their desire to run their new businesses free of institutional barriers or outmoded cultural practices contributed to the appeal of humanist individualism. For many, papal institutions were rigid, especially regarding their views on just price and their rejection of usury (interest rates on loans). In the north, burghers and monarchs were united in their frustration with the church's practice of paying no taxes to the nation while collecting taxes from subjects and sending the revenues disproportionately to the Pope in Italy. Early Attempts at Reform The first of a series of disruptive new perspectives came from John Wycliffe (14th century) at Oxford University in England, one of the earliest opponents of papal authority over secular power and an early advocate for translation of the Bible into the common language. Jan Hus at the University of Prague in the Kingdom of Bohemia (modern Czech Republic) was a follower of Wycliffe and similarly objected to some of the practices of the Roman Catholic Church. Hus wanted liturgy (public prayer in church services) in the language of the people (i.e., Czech), a married priesthood, and the elimination of indulgences and the idea of purgatory. Hus spoke out against indulgences in 1412 when he delivered an address entitled Quaestio magistri Johannis Hus de indulgentiis (Questions of the teacher John Hus regarding Indulgences). It was taken nearly verbatim from the last chapter of Wycliffe's book, De ecclesia (Concerning the Church), and his treatise, De absolutione a pena et culpa (Absolution from punishment and sin). 
Hus asserted that no Pope or bishop had the right to take up the sword in the name of the Church; he should instead pray for his enemies and bless those who curse him. Furthermore, according to Hus, man obtains forgiveness of sins by true repentance, not by money (paying for indulgences). The doctors of the theological faculty replied to these statements, but without success. A few days afterward, some of Hus's followers burnt the papal bulls (letters carrying the commands of the Pope). Hus, they said, should be obeyed rather than the Church, which they considered a fraudulent mob of adulterers and Simonists (people who bought the office of priest). In response, three men from the lower classes who openly called the indulgences a fraud were beheaded. They were later considered the first martyrs of the Hussite Church. In the meantime, the faculty had condemned the forty-five articles and added several other theses, deemed heretical, that had originated with Hus. The king of Bohemia forbade the teaching of these articles, but neither Hus nor the university complied with the ruling, requesting that the articles should first be proven to be un-scriptural. The tumults at Prague caused a sensation; papal legates and Archbishop Albik tried to persuade Hus to give up his opposition to the papal bulls, and the king made an unsuccessful attempt to reconcile the two parties. Hus was later condemned and burned at the stake, despite a promise of safe-conduct, when he voiced his views to church leaders at the Council of Constance (1414–1418). Wycliffe, who died in 1384, was also declared a heretic by the Council of Constance, and his corpse was exhumed and burned. Martin Luther The Protestant Reformation began with the call for reform by the German monk Martin Luther. Learning Objectives - Explain Luther's criticisms of Catholicism. - Identify the key features of Luther's teachings. 
Key Terms / Key Concepts Excommunication: an institutional act of religious censure used to deprive, suspend, or limit membership in a religious community or to restrict certain rights within it Ninety-five Theses: a list of propositions for an academic disputation written by Martin Luther in 1517. They advanced Luther’s positions against what he saw as abusive practices by religious leaders Indulgences: certificates that would reduce the temporal punishment for sins committed by the purchaser or their loved ones in purgatory Purgatory: a place where Christian souls went to be purified of their sins after death before they could be allowed to enter Heaven, according to Roman Catholic teachings Martin Luther Martin Luther (November 10, 1483 – February 18, 1546) was a German professor of theology, composer, priest, monk, and seminal figure in the Protestant Reformation. Luther came to reject several teachings and practices of the Roman Catholic Church. He strongly disputed the claim that freedom from God’s punishment for sin could be purchased with money, which led him to propose an academic discussion of the practice and efficacy of indulgences in his Ninety-five Theses of 1517. His refusal to renounce all his writings at the demand of Pope Leo X in 1520, and the Holy Roman Emperor Charles V at the Diet of Worms in 1521, resulted in his excommunication by the pope and his condemnation as an outlaw by the emperor. Luther taught that salvation and, subsequently, eternal life are not earned by good deeds but are received only as the free gift of God’s grace through the believer’s faith in Jesus Christ as redeemer from sin. His theology challenged the authority and office of the pope by teaching that the Bible is the only source of divinely revealed knowledge from God, and it opposed priestly intervention for the forgiveness of sins because he considered all baptized Christians to be a holy priesthood. 
Those who identify with these, and all of Luther's wider teachings, are called Lutherans; however, Luther insisted on the terms Christian or Evangelical as the only acceptable names for individuals who professed Christ. His translation of the Bible into the vernacular (German, instead of Latin) made it more accessible to the laity (non-clergy), an event that had a tremendous impact on both the church and German culture. It fostered the development of a standard version of the German language, added several principles to the art of translation, and influenced the writing of an English translation—the Tyndale Bible. His hymns influenced the development of singing in Protestant churches, and his marriage to Katharina von Bora, a former nun, set a model for the practice of clerical marriage, allowing Protestant clergy to marry. In two of his later works, Luther expressed antagonistic views toward Jews, writing that Jewish homes and synagogues should be destroyed, their money confiscated, and their liberty curtailed. Condemned by virtually every Lutheran denomination, these statements and their influence on antisemitism have contributed to his controversial status. Personal Life Martin Luther was born in 1483 in Eisleben, Saxony, then part of the Holy Roman Empire. His father, Hans Luther, was ambitious for himself and his family, and he was determined to see Martin, his eldest son, become a lawyer. In 1501, at the age of nineteen, Martin entered the University of Erfurt. In accordance with his father's wishes, he enrolled in law school, but dropped out almost immediately due to a sense of uncertainty in his life. Luther sought assurances about life and was drawn to theology and philosophy, expressing particular interest in Aristotle, William of Ockham, and Gabriel Biel. Philosophy, however, proved to be unsatisfying, offering assurance about the use of reason but no assurance about loving God, which to Luther was more important. 
Reason could not lead men to God, he felt, and he thereafter developed a love-hate relationship with Aristotle over the latter's emphasis on reason. For Luther, reason could be used to question men and institutions, but not God. Human beings could learn about God only through divine revelation, he believed, and scripture therefore became increasingly important to him. Luther dedicated himself as a monk to the Augustinian order, devoting himself to fasting, long hours in prayer, pilgrimage, and frequent confession. In 1507, he was ordained to the priesthood, and in 1508, von Staupitz, first dean of the newly founded University of Wittenberg, sent for Luther to teach theology. Start of the Reformation In 1516, Johann Tetzel, a Dominican friar and papal commissioner for indulgences, was sent to Germany by the Roman Catholic Church to sell indulgences to raise money for rebuilding St. Peter's Basilica in Rome. Roman Catholic theology stated that faith alone, whether fiduciary or dogmatic, cannot justify man; justification rather depends only on such faith as is active in charity and good works. The benefits of good works could be obtained by donating money to the church. On October 31, 1517, Luther wrote to his bishop, Albert of Mainz, protesting the sale of indulgences. He enclosed in his letter a copy of his "Disputation of Martin Luther on the Power and Efficacy of Indulgences," which came to be known as the Ninety-five Theses. Luther had no intention of confronting the church; rather, he saw his disputation as a scholarly objection to church practices. The purpose of the writing was a search for answers rather than a statement of faith. In the first few theses Luther develops the idea of repentance as the Christian's inner struggle with sin rather than the external system of sacramental confession. 
The first thesis has become famous: “When our Lord and Master Jesus Christ said, ‘Repent,’ he willed the entire life of believers to be one of repentance.” In theses 41–47 Luther begins to criticize indulgences on the basis that they discourage works of mercy by those who purchase them. Here he begins to use the phrase, “Christians are to be taught…” to state how he thinks people should be instructed on the value of indulgences. They should be taught that giving to the poor is incomparably more important than buying indulgences, that buying an indulgence rather than giving to the poor invites God’s wrath, and that doing good works makes a person better while buying indulgences does not. There is an undercurrent of challenge in several of the theses, particularly in Thesis 86, which asks, “Why does the pope, whose wealth today is greater than the wealth of the richest Crassus, build the basilica of St. Peter with the money of poor believers rather than with his own money?” Luther objected to a saying attributed to Johann Tetzel that “As soon as the coin in the coffer rings, the soul from purgatory springs.” He insisted that, since forgiveness was God’s alone to grant, those who claimed that indulgences absolved buyers from all punishments and granted them salvation were in error. Luther closes the Theses by exhorting Christians to imitate Christ even if it brings pain and suffering, because enduring punishment and entering heaven is preferable to false security. It was not until January 1518 that friends of Luther translated the Ninety-five Theses from Latin into German and printed and widely copied it, making the controversy one of the first to be aided by the printing press. Within two weeks, copies of the theses had spread throughout Germany; within two months, they had spread throughout Europe. 
Excommunication and Later Life On June 15, 1520, the pope warned Luther, with the papal bull (a public decree) Exsurge Domine, that he risked excommunication unless he recanted forty-one sentences drawn from his writings, including the Ninety-five Theses, within sixty days. That autumn, Johann Eck proclaimed the bull in Meissen and other towns. Karl von Miltitz, a papal nuncio, attempted to broker a solution, but Luther, who had sent the pope a copy of On the Freedom of a Christian in October, publicly set fire to the bull at Wittenberg on December 10, 1520, an act he defended in Why the Pope and his Recent Book are Burned and Assertions Concerning All Articles. As a consequence of these actions, Luther was excommunicated by Pope Leo X on January 3, 1521, in the bull Decet Romanum Pontificem. The enforcement of the ban on the Ninety-five Theses fell to the secular authorities. On April 18, 1521, Luther appeared as ordered before the Diet of Worms. This was a general assembly (diet), where the representatives of the different states of the Holy Roman Empire met in Worms, a town on the Rhine. It was conducted from January 28 to May 25, 1521, with Emperor Charles V presiding. Prince Frederick III, Elector of Saxony, obtained safe conduct for Luther to and from the meeting. Johann Eck, speaking on behalf of the empire as assistant to the Archbishop of Trier, presented Luther with copies of his writings laid out on a table and asked him if the books were his, and whether he stood by their contents. Luther confirmed he was their author but requested time to think about the answer to the second question. 
He prayed, consulted friends, and gave his response the next day: "Unless I am convinced by the testimony of the Scriptures or by clear reason (for I do not trust either in the pope or in councils alone, since it is well known that they have often erred and contradicted themselves), I am bound by the Scriptures I have quoted and my conscience is captive to the Word of God. I cannot and will not recant anything, since it is neither safe nor right to go against conscience. May God help me. Amen." Over the next five days, private conferences were held to determine Luther's fate. The emperor presented the final draft of the Edict of Worms on May 25, 1521, which declared Luther an outlaw, banned his literature, and required his arrest. The Edict stated: "We want him to be apprehended and punished as a notorious heretic." Additionally, it made it a crime for anyone in Germany to give Luther food or shelter and permitted anyone to kill Luther without legal consequence. Before Luther could be punished by execution, Frederick "the Wise," the Elector of Saxony, provided a safe haven for Luther at his castle at Wartburg. Frederick was the ruler of Saxony and one of the seven "Electors" who chose the Holy Roman emperor. At Wartburg, under Frederick's protection, Luther continued to write letters and sermons to encourage and educate other champions of church reform across the Holy Roman Empire and the rest of Europe. In 1530 another imperial diet convened at Augsburg and again condemned Luther as a heretic. At this diet, however, some of the assembled princes "protested" this condemnation and instead supported Luther. Thereafter, supporters of Luther became known as "Protestants," a term that over time came to refer to all the new churches that broke away from the Roman Catholic Church. For the rest of his life, until his death in 1546, Luther was increasingly occupied in organizing this new church, later called the Lutheran Church. 
Calvinism John Calvin was, alongside Martin Luther, a leading Protestant reformer. Calvin's doctrines inspired a number of new Protestant churches across Europe. Learning Objectives - Identify the main points of John Calvin's theology and compare and contrast it with Lutheranism. Key Terms / Key Concepts Five Points of Calvinism: the basic theological tenets of Calvinism Huguenots: a name for French Protestants, originally a derisive term Predestination: the doctrine that all events have been willed by God, usually with reference to the eventual fate of the individual soul Calvinism Calvinism is a major branch of Protestantism that follows the theological tradition and forms of Christian practice set forth by John Calvin and other Reformation-era theologians. Calvinists broke with the Roman Catholic Church but differed from Lutherans on the real presence of Christ in the Eucharist, theories of worship, and the use of God's law for believers, among other things. Calvinism can be a misleading term because the religious tradition it denotes is and has always been diverse, with a wide range of influences rather than a single founder. The movement was first called Calvinism by Lutherans who opposed it, but many within the tradition would prefer to use the word Reformed. While the Reformed theological tradition addresses all of the traditional topics of Christian theology, the word Calvinism is sometimes used to refer to particular Calvinist views on soteriology (the saving of the soul from sin and death) and predestination, which are summarized in part by the Five Points of Calvinism. Some have also argued that Calvinism, as a whole, stresses the sovereignty or rule of God in all things, including salvation. An important tenet of Calvinism, which differs from Lutheranism, is that God saves only the "elect," a predestined group of individuals; those elect are essentially guaranteed salvation, while everyone else is damned. 
Origins and Rise of Calvinism First-generation Reformed theologians include Huldrych Zwingli (1484 – 1531), Martin Bucer (1491 – 1551), Wolfgang Capito (1478 – 1541), John Oecolampadius (1482 – 1531), and Guillaume Farel (1489 – 1565). These reformers came from diverse academic backgrounds, but later distinctions within Reformed theology can already be detected in their thought, especially the priority of scripture as a source of authority. Scripture was also viewed as a unified whole, which led to a covenantal theology of the sacraments of baptism and the Lord’s Supper as visible signs of the covenant of grace. Another Reformed distinctive present in these theologians was their denial of the bodily presence of Christ in the Lord’s Supper. Each of these theologians also understood salvation to be by grace alone and affirmed a doctrine of particular election (the teaching that some people are chosen by God for salvation). Martin Luther and his successor Philipp Melanchthon were undoubtedly significant influences on these theologians, and to a larger extent later Reformed theologians. For instance, the doctrine of justification by faith alone was a direct inheritance from Luther. Following the excommunication of Luther and condemnation of the Reformation by the pope, the work and writings of John Calvin were influential in establishing a loose consensus among various groups in Switzerland, Scotland, Hungary, Germany, and elsewhere. After the expulsion of Geneva’s bishop in 1526, and the unsuccessful attempts of the Berne reformer Guillaume (William) Farel, Calvin was asked to use the organizational skill he had gathered as a student of law in France to discipline the “fallen city.” His “Ordinances” of 1541 involved a collaboration of church affairs with the city council and consistory (council of clergy) to bring morality to all areas of life. 
After the establishment of the Geneva academy in 1559, Geneva became the unofficial capital of the Protestant movement, providing refuge for Protestant exiles from all over Europe and educating them as Calvinist missionaries. These missionaries dispersed Calvinism widely; they formed the French Huguenot movement in Calvin's own lifetime and brought about the conversion of Scotland under the leadership of the cantankerous John Knox in 1560. The faith continued to spread after Calvin's death in 1564, and it reached as far as Constantinople by the start of the 17th century. Calvin's Institutes of the Christian Religion (1536 – 1559) was one of the most influential theologies of the era. The book was written as an introductory textbook on the Protestant faith for those with some previous knowledge of theology, and it covered a broad range of theological topics, from the doctrines of church and sacraments to justification by faith alone, as well as Christian liberty. It vigorously attacked the teachings Calvin considered unorthodox, particularly Roman Catholicism, to which Calvin says he had been "strongly devoted" before his conversion to Protestantism. Controversies in France Protestantism spread into France, where Protestants were derisively nicknamed "Huguenots," and this touched off decades of warfare. Huguenots faced persecution in France, but many still contributed to the Protestant movement, including many who emigrated to other countries, most notably John Calvin, who settled in Geneva. Calvin continued to take an interest in the religious affairs of his native land and, from his base in Geneva, beyond the reach of the French king, regularly trained pastors to lead congregations in France. 
Despite heavy persecution by Henry II, the Reformed Church of France, largely Calvinist in direction, made steady progress across large sections of the nation, in the urban bourgeoisie and parts of the aristocracy, appealing to people alienated by the perceived corruption of the Catholic establishment. Calvinist Theology The “Five Points of Calvinism” summarize the faith’s basic tenets, although some historians contend that it distorts the nuance of Calvin’s own theological positions. The Five Points: - “Total depravity” asserts that as a consequence of the fall of man into sin, every person is enslaved to sin. People are not by nature inclined to love God, but rather to serve their own interests and to reject the rule of God. Thus, all people by their own faculties are morally unable to choose to follow God and be saved because they are unwilling to do so out of the necessity of their own natures. - “Unconditional election” asserts that God has chosen from eternity those whom he will bring to himself not based on foreseen virtue, merit, or faith in those people; rather, his choice is unconditionally grounded in his mercy alone. God has chosen from eternity to extend mercy to those he has chosen and to withhold mercy from those not chosen. Those chosen receive salvation through Christ alone. Those not chosen receive the just wrath that is warranted for their sins against God. - “Limited atonement” asserts that Jesus’s substitutionary atonement was definite and certain in its purpose and in what it accomplished. This implies that only the sins of the elect were atoned for by Jesus’s death. Calvinists do not believe, however, that the atonement is limited in its value or power, but rather that the atonement is limited in the sense that it is intended for some and not all. All Calvinists would affirm that the blood of Christ was sufficient to pay for every single human being IF it were God’s intention to save every single human being. 
- “Irresistible grace” asserts that the saving grace of God is effectually applied to those whom he has determined to save (that is, the elect) and overcomes their resistance to obeying the call of the gospel, bringing them to a saving faith. This means that when God sovereignly purposes to save someone, that individual certainly will be saved. The doctrine holds that this purposeful influence of God’s Holy Spirit cannot be resisted. - “Perseverance of the saints” asserts that since God is sovereign and his will cannot be frustrated by humans or anything else, those whom God has called into communion with himself will continue in faith until the end. Anabaptism Anabaptists (or Baptists) arose as one branch of the Protestant Reformation. Learning Objectives - Discuss Anabaptism and why its adherents were persecuted throughout Europe by both Catholics and Protestants. Key Terms / Key Concepts Ulrich Zwingli: a leader of the Reformation in Switzerland who clashed with the Anabaptists infant baptism: the practice of baptizing infants or young children, sometimes contrasted with what is called “believer’s baptism,” which is the religious practice of baptizing only individuals who personally confess faith in Jesus Magisterial Protestants: a phrase that names the manner in which the Lutheran and Calvinist reformers related to secular authorities, such as princes, magistrates, or city councils; opposed to the Radical Protestants Anabaptism Anabaptism is a Christian movement that traces its origins to the Radical Reformation in Europe. Some consider this movement to be an offshoot of European Protestantism, while others see it as a separate and distinct development. Anabaptists are Christians who believe in delaying baptism until the candidate confesses his or her faith in Christ, as opposed to being baptized as an infant. The Amish, Hutterites, and Mennonites are direct descendants of the movement. 
Schwarzenau Brethren, Bruderhof, and the Apostolic Christian Church are considered later developments among the Anabaptists. The name Anabaptist means “one who baptizes again.” Their persecutors named them this, referring to the practice of baptizing persons when they converted or declared their faith in Christ, even if they had been “baptized” as infants. Anabaptists required that baptismal candidates be able to make a confession of faith that was freely chosen, and thus rejected baptism of infants. The early members of this movement did not accept the name Anabaptist, claiming that infant baptism was not part of scripture and was, therefore, null and void. They said that baptizing self-confessed believers was their first true baptism. Balthasar Hubmaier wrote: “I have never taught Anabaptism…But the right baptism of Christ, which is preceded by teaching and oral confession of faith, I teach, and say that infant baptism is a robbery of the right baptism of Christ.” Anabaptists were heavily persecuted by both Magisterial Protestants and Roman Catholics during the 16th century and into the 17th century because of their views on the nature of baptism and other issues. Anabaptists were persecuted largely because of their interpretation of scripture that put them at odds with official state church interpretations and government. Most Anabaptists adhered to a literal interpretation of the Sermon on the Mount, which precluded taking oaths, participating in military actions, and participating in civil government. Some who practiced re-baptism, however, felt otherwise, and complied with these requirements of civil society. They were thus technically Anabaptists, even though conservative Amish, Mennonites, and Hutterites, and some historians, tend to consider them as outside of true Anabaptism. Origins Anabaptism in Switzerland began as an offshoot of the church reforms instigated by Ulrich Zwingli. 
As early as 1522, it became evident that Zwingli was on a path of reform preaching, when he began to question or criticize such Catholic practices as tithes, the mass, and even infant baptism. Zwingli had gathered a group of reform-minded men around him, with whom he studied classical literature and the scriptures. However, some of these young men began to feel that Zwingli was not moving fast enough in his reform. The division between Zwingli and his more radical disciples became apparent in an October 1523 disputation held in Zurich. When the discussion of the mass was about to be ended without making any actual change in practice, Conrad Grebel stood up and asked, “What should be done about the mass?” Zwingli responded by saying the council would make that decision. At this point, Simon Stumpf, a radical priest from Hongg, answered, “The decision has already been made by the Spirit of God.” This incident illustrated clearly that Zwingli and his more radical disciples had different expectations. To Zwingli, the reforms would only go as fast as the city council allowed them. To the radicals, the council had no right to make that decision, but rather the Bible was the final authority on church reform. Feeling frustrated, some of them would begin to meet on their own for Bible study. The city council also ruled in 1525 that all who refused to baptize their infants within one week should be expelled from Zurich. Since Conrad Grebel had refused to baptize his daughter Rachel, born on January 5, 1525, the council decision was extremely personal to him and others who had not baptized their children. As early as 1523, William Reublin had begun to preach against infant baptism in villages surrounding Zurich, encouraging parents to not baptize their children. Thus, when sixteen of the radicals met on Saturday evening, January 21, 1525, the situation seemed particularly dark. 
At that meeting Grebel baptized George Blaurock, and Blaurock in turn baptized several others immediately. These baptisms were the first “re-baptisms” known in the movement. This continues to be the most widely accepted date posited for the establishment of Anabaptism. Anabaptism then spread to Tyrol (modern-day Austria), South Germany, Moravia, the Netherlands, and Belgium. Persecutions Roman Catholics and Protestants alike persecuted the Anabaptists, resorting to torture and execution in attempts to curb the growth of the movement. The Protestants under Zwingli were the first to persecute the Anabaptists, with Felix Manz becoming the first martyr in 1527. On May 20, 1527, Roman Catholic authorities executed Michael Sattler. King Ferdinand of Hungary declared drowning (called the third baptism) “the best antidote to Anabaptism.” The Tudor regime, even the Protestant monarchs (Edward VI of England and Elizabeth I of England), persecuted Anabaptists, as they were deemed too radical and, therefore, a danger to religious stability. Martyrs Mirror, by Thieleman J. van Braght, describes the persecution and execution of thousands of Anabaptists in various parts of Europe between 1525 and 1660. Continuing persecution in Europe was largely responsible for the mass emigrations to North America by Amish, Hutterites, and Mennonites. Primary Sources The 95 Theses of Martin Luther A Primary Source Document Provided By A.C.T.S. Disputation of Martin Luther on the Power and Efficacy of Indulgences by Dr. Martin Luther (1517) Published in: Works of Martin Luther: Adolph Spaeth, L.D. Reed, Henry Eyster Jacobs, et Al., Trans. & Eds. (Philadelphia: A. J. Holman Company, 1915), Vol.1, pp. 29-38 _______________ 1. Our Lord and Master Jesus Christ, when He said Poenitentiam agite, willed that the whole life of believers should be repentance. 2. This word cannot be understood to mean sacramental penance, i.e., confession and satisfaction, which is administered by the priests. 3. 
Yet it means not inward repentance only; nay, there is no inward repentance which does not outwardly work divers mortifications of the flesh. 4. The penalty [of sin], therefore, continues so long as hatred of self continues; for this is the true inward repentance, and continues until our entrance into the kingdom of heaven. 5. The pope does not intend to remit, and cannot remit any penalties other than those which he has imposed either by his own authority or by that of the Canons. 6. The pope cannot remit any guilt, except by declaring that it has been remitted by God and by assenting to God's remission; though, to be sure, he may grant remission in cases reserved to his judgment. If his right to grant remission in such cases were despised, the guilt would remain entirely unforgiven. 7. God remits guilt to no one whom He does not, at the same time, humble in all things and bring into subjection to His vicar, the priest. 8. The penitential canons are imposed only on the living, and, according to them, nothing should be imposed on the dying. 9. Therefore the Holy Spirit in the pope is kind to us, because in his decrees he always makes exception of the article of death and of necessity. 10. Ignorant and wicked are the doings of those priests who, in the case of the dying, reserve canonical penances for purgatory. 11. This changing of the canonical penalty to the penalty of purgatory is quite evidently one of the tares that were sown while the bishops slept. 12. In former times the canonical penalties were imposed not after, but before absolution, as tests of true contrition. 13. The dying are freed by death from all penalties; they are already dead to canonical rules, and have a right to be released from them. 14. The imperfect health [of soul], that is to say, the imperfect love, of the dying brings with it, of necessity, great fear; and the smaller the love, the greater is the fear. 15. 
This fear and horror is sufficient of itself alone (to say nothing of other things) to constitute the penalty of purgatory, since it is very near to the horror of despair. 16. Hell, purgatory, and heaven seem to differ as do despair, almost-despair, and the assurance of safety. 17. With souls in purgatory it seems necessary that horror should grow less and love increase. 18. It seems unproved, either by reason or Scripture, that they are outside the state of merit, that is to say, of increasing love. 19. Again, it seems unproved that they, or at least that all of them, are certain or assured of their own blessedness, though we may be quite certain of it. 20. Therefore by "full remission of all penalties" the pope means not actually "of all," but only of those imposed by himself. 21. Therefore those preachers of indulgences are in error, who say that by the pope's indulgences a man is freed from every penalty, and saved; 22. Whereas he remits to souls in purgatory no penalty which, according to the canons, they would have had to pay in this life. 23. If it is at all possible to grant to any one the remission of all penalties whatsoever, it is certain that this remission can be granted only to the most perfect, that is, to the very fewest. 24. It must needs be, therefore, that the greater part of the people are deceived by that indiscriminate and high sounding promise of release from penalty. 25. The power which the pope has, in a general way, over purgatory, is just like the power which any bishop or curate has, in a special way, within his own diocese or parish. 26. The pope does well when he grants remission to souls [in purgatory], not by the power of the keys (which he does not possess), but by way of intercession. 27. They preach man who say that so soon as the penny jingles into the money-box, the soul flies out [of purgatory]. 28. 
It is certain that when the penny jingles into the money-box, gain and avarice can be increased, but the result of the intercession of the Church is in the power of God alone. 29. Who knows whether all the souls in purgatory wish to be bought out of it, as in the legend of Sts. Severinus and Paschal. 30. No one is sure that his own contrition is sincere; much less that he has attained full remission. 31. Rare as is the man that is truly penitent, so rare is also the man who truly buys indulgences, i.e., such men are most rare. 32. They will be condemned eternally, together with their teachers, who believe themselves sure of their salvation because they have letters of pardon. 33. Men must be on their guard against those who say that the pope's pardons are that inestimable gift of God by which man is reconciled to Him; 34. For these "graces of pardon" concern only the penalties of sacramental satisfaction, and these are appointed by man. 35. They preach no Christian doctrine who teach that contrition is not necessary in those who intend to buy souls out of purgatory or to buy confessionalia. 36. Every truly repentant Christian has a right to full remission of penalty and guilt, even without letters of pardon. 37. Every true Christian, whether living or dead, has part in all the blessings of Christ and the Church; and this is granted him by God, even without letters of pardon. 38. Nevertheless, the remission and participation [in the blessings of the Church] which are granted by the pope are in no way to be despised, for they are, as I have said, the declaration of divine remission. 39. It is most difficult, even for the very keenest theologians, at one and the same time to commend to the people the abundance of pardons and [the need of] true contrition. 40. True contrition seeks and loves penalties, but liberal pardons only relax penalties and cause them to be hated, or at least, furnish an occasion [for hating them]. 41. 
Apostolic pardons are to be preached with caution, lest the people may falsely think them preferable to other good works of love. 42. Christians are to be taught that the pope does not intend the buying of pardons to be compared in any way to works of mercy. 43. Christians are to be taught that he who gives to the poor or lends to the needy does a better work than buying pardons; 44. Because love grows by works of love, and man becomes better; but by pardons man does not grow better, only more free from penalty. 45. Christians are to be taught that he who sees a man in need, and passes him by, and gives [his money] for pardons, purchases not the indulgences of the pope, but the indignation of God. 46. Christians are to be taught that unless they have more than they need, they are bound to keep back what is necessary for their own families, and by no means to squander it on pardons. 47. Christians are to be taught that the buying of pardons is a matter of free will, and not of commandment. 48. Christians are to be taught that the pope, in granting pardons, needs, and therefore desires, their devout prayer for him more than the money they bring. 49. Christians are to be taught that the pope's pardons are useful, if they do not put their trust in them; but altogether harmful, if through them they lose their fear of God. 50. Christians are to be taught that if the pope knew the exactions of the pardon-preachers, he would rather that St. Peter's church should go to ashes, than that it should be built up with the skin, flesh and bones of his sheep. 51. Christians are to be taught that it would be the pope's wish, as it is his duty, to give of his own money to very many of those from whom certain hawkers of pardons cajole money, even though the church of St. Peter might have to be sold. 52. The assurance of salvation by letters of pardon is vain, even though the commissary, nay, even though the pope himself, were to stake his soul upon it. 53.
They are enemies of Christ and of the pope, who bid the Word of God be altogether silent in some Churches, in order that pardons may be preached in others. 54. Injury is done the Word of God when, in the same sermon, an equal or a longer time is spent on pardons than on this Word. 55. It must be the intention of the pope that if pardons, which are a very small thing, are celebrated with one bell, with single processions and ceremonies, then the Gospel, which is the very greatest thing, should be preached with a hundred bells, a hundred processions, a hundred ceremonies. 56. The "treasures of the Church," out of which the pope grants indulgences, are not sufficiently named or known among the people of Christ. 57. That they are not temporal treasures is certainly evident, for many of the vendors do not pour out such treasures so easily, but only gather them. 58. Nor are they the merits of Christ and the Saints, for even without the pope, these always work grace for the inner man, and the cross, death, and hell for the outward man. 59. St. Lawrence said that the treasures of the Church were the Church's poor, but he spoke according to the usage of the word in his own time. 60. Without rashness we say that the keys of the Church, given by Christ's merit, are that treasure; 61. For it is clear that for the remission of penalties and of reserved cases, the power of the pope is of itself sufficient. 62. The true treasure of the Church is the Most Holy Gospel of the glory and the grace of God. 63. But this treasure is naturally most odious, for it makes the first to be last. 64. On the other hand, the treasure of indulgences is naturally most acceptable, for it makes the last to be first. 65. Therefore the treasures of the Gospel are nets with which they formerly were wont to fish for men of riches. 66. The treasures of the indulgences are nets with which they now fish for the riches of men. 67.
The indulgences which the preachers cry as the "greatest graces" are known to be truly such, in so far as they promote gain. 68. Yet they are in truth the very smallest graces compared with the grace of God and the piety of the Cross. 69. Bishops and curates are bound to admit the commissaries of apostolic pardons, with all reverence. 70. But still more are they bound to strain all their eyes and attend with all their ears, lest these men preach their own dreams instead of the commission of the pope. 71. He who speaks against the truth of apostolic pardons, let him be anathema and accursed! 72. But he who guards against the lust and license of the pardon-preachers, let him be blessed! 73. The pope justly thunders against those who, by any art, contrive the injury of the traffic in pardons. 74. But much more does he intend to thunder against those who use the pretext of pardons to contrive the injury of holy love and truth. 75. To think the papal pardons so great that they could absolve a man even if he had committed an impossible sin and violated the Mother of God -- this is madness. 76. We say, on the contrary, that the papal pardons are not able to remove the very least of venial sins, so far as its guilt is concerned. 77. It is said that even St. Peter, if he were now Pope, could not bestow greater graces; this is blasphemy against St. Peter and against the pope. 78. We say, on the contrary, that even the present pope, and any pope at all, has greater graces at his disposal; to wit, the Gospel, powers, gifts of healing, etc., as it is written in I. Corinthians xii. 79. To say that the cross, emblazoned with the papal arms, which is set up [by the preachers of indulgences], is of equal worth with the Cross of Christ, is blasphemy. 80. The bishops, curates and theologians who allow such talk to be spread among the people, will have an account to render. 81. 
This unbridled preaching of pardons makes it no easy matter, even for learned men, to rescue the reverence due to the pope from slander, or even from the shrewd questionings of the laity. 82. To wit: -- "Why does not the pope empty purgatory, for the sake of holy love and of the dire need of the souls that are there, if he redeems an infinite number of souls for the sake of miserable money with which to build a Church? The former reasons would be most just; the latter is most trivial." 83. Again: -- "Why are mortuary and anniversary masses for the dead continued, and why does he not return or permit the withdrawal of the endowments founded on their behalf, since it is wrong to pray for the redeemed?" 84. Again: -- "What is this new piety of God and the pope, that for money they allow a man who is impious and their enemy to buy out of purgatory the pious soul of a friend of God, and do not rather, because of that pious and beloved soul's own need, free it for pure love's sake?" 85. Again: -- "Why are the penitential canons long since in actual fact and through disuse abrogated and dead, now satisfied by the granting of indulgences, as though they were still alive and in force?" 86. Again: -- "Why does not the pope, whose wealth is to-day greater than the riches of the richest, build just this one church of St. Peter with his own money, rather than with the money of poor believers?" 87. Again: -- "What is it that the pope remits, and what participation does he grant to those who, by perfect contrition, have a right to full remission and participation?" 88. Again: -- "What greater blessing could come to the Church than if the pope were to do a hundred times a day what he now does once, and bestow on every believer these remissions and participations?" 89. "Since the pope, by his pardons, seeks the salvation of souls rather than money, why does he suspend the indulgences and pardons granted heretofore, since these have equal efficacy?" 90. 
To repress these arguments and scruples of the laity by force alone, and not to resolve them by giving reasons, is to expose the Church and the pope to the ridicule of their enemies, and to make Christians unhappy. 91. If, therefore, pardons were preached according to the spirit and mind of the pope, all these doubts would be readily resolved; nay, they would not exist. 92. Away, then, with all those prophets who say to the people of Christ, "Peace, peace," and there is no peace! 93. Blessed be all those prophets who say to the people of Christ, "Cross, cross," and there is no cross! 94. Christians are to be exhorted that they be diligent in following Christ, their Head, through penalties, deaths, and hell; 95. And thus be confident of entering into heaven rather through many tribulations, than through the assurance of peace.

This text was converted to ASCII text for Project Wittenberg by Allen Mulvey, and is in the public domain. You may freely distribute, copy or print this text. https://www.americancatholictruthsociety.com/docs/95Theses.htm

Attributions

Title Image: https://commons.wikimedia.org/wiki/File:Martin_Luther,_1529.jpg
Martin Luther - Lucas Cranach the Elder, Public domain, via Wikimedia Commons

Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/protestantism/
https://creativecommons.org/licenses/by-sa/4.0/
Source: Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE. https://oercommons.org/courseware/lesson/87865/overview (Creative Commons Attribution 4.0)
https://oercommons.org/courseware/lesson/87866/overview
The English Reformation

Overview

The English Reformation

In England, the Protestant Reformation began as an initiative of the English crown for political reasons, but over time gained support from the general population.

Learning Objectives

- Discuss the major causes and contours of the English Reformation and its key developments.

Key Terms / Key Concepts

Canon Law: the body of laws and regulations made by ecclesiastical authority (church leadership), for the government of a Christian organization or church and its members

Annulment: legal term for declaring a marriage null and void. (Unlike divorce, it is usually retroactive, meaning that this kind of marriage is considered to be invalid from the beginning, almost as if it had never taken place. Annulment is closely associated with the Catholic Church, which does not permit divorce, teaching that marriage is a lifelong commitment that cannot be dissolved through divorce.)

Nationalism: a belief, creed, or political ideology that involves an individual identifying with, or becoming attached to, one’s country of origin

Puritans: group of English Protestants in the 16th and 17th centuries founded by some exiles from the clergy shortly after the accession of Elizabeth I of England

The English Reformation

The English Reformation was a series of events in 16th-century England by which the Church of England broke away from the authority of the pope and the Roman Catholic Church. The English Reformation was, in part, associated with the wider process of the European Protestant Reformation—a religious and political movement that affected the practice of Christianity across most of Europe during this period. Many factors contributed to the process: the decline of feudalism and the rise of nationalism, the rise of the common law, the invention of the printing press and increased circulation of the Bible, and the transmission of new knowledge and ideas among scholars, the upper and middle classes, and readers in general.
However, the various phases of the English Reformation, which also covered Wales and Ireland, were largely driven by changes in government policy, to which public opinion gradually accommodated itself.

Role of Henry VIII and Royal Marriages

Henry VIII ascended the English throne in 1509 at the age of seventeen. He made a dynastic marriage with Catherine of Aragon, widow of his brother, Arthur, in June 1509, just before his coronation on Midsummer’s Day. Unlike his father, who was secretive and conservative, the young Henry appeared the epitome of chivalry and sociability. An observant Roman Catholic, he heard up to five masses a day (except during the hunting season). He let himself be influenced by his advisors, from whom he was never apart. He was thus susceptible to whoever had his ear. This contributed to a state of hostility between his young contemporaries and the Lord Chancellor, Cardinal Thomas Wolsey. As long as Wolsey had his ear, Henry’s Roman Catholicism was secure. In 1521, Wolsey helped Henry defend the Roman Catholic Church from Martin Luther’s accusations of heresy in a book Henry wrote—probably with considerable help from the conservative Bishop of Rochester, John Fisher—entitled The Defence of the Seven Sacraments; for this Henry VIII was awarded the title “Defender of the Faith” by Pope Leo X. Wolsey’s enemies at court included those who had been influenced by Lutheran ideas, among whom was the attractive, charismatic Anne Boleyn, who became the mistress of Henry VIII. Anne arrived at court in 1522 from years in France, where she had been educated by Queen Claude of France. Anne served as maid of honor to Queen Catherine. She was a woman renowned for her charm, style, and wit. By the late 1520s, Henry wanted his marriage to Catherine annulled, so that he could marry Anne. Catherine had not produced a male heir who survived into adulthood, and Henry wanted a son to secure the Tudor dynasty.
Henry claimed that this lack of a male heir was because his marriage was, in his words, “blighted in the eyes of God”; Catherine had been his late brother’s wife, and it was therefore against biblical teachings for Henry to have married her—a special dispensation from Pope Julius II had been needed to allow the wedding to take place. Henry argued that this had been wrong and that his marriage had never been valid. In 1527 Henry asked Pope Clement VII to annul the marriage, but the pope refused. According to Canon Law the pope cannot annul a marriage on the basis of a canonical impediment previously dispensed. Clement also feared the wrath of Catherine’s nephew, Holy Roman Emperor Charles V, whose troops earlier that year had sacked Rome and briefly taken the pope prisoner. In 1529 the king summoned parliament to secure the annulment, thus bringing together those who wanted reform but disagreed on the form it should take; it became known as the Reformation Parliament. There were common lawyers who resented the privileges of the clergy to summon laity to their courts, and there were those who had been influenced by Lutheran evangelicalism and were hostile to the theology of Rome; Thomas Cromwell was both. Cromwell was a lawyer and a member of Parliament—a Protestant who saw how Parliament could be used to advance the Royal Supremacy, which Henry wanted, and to further the Protestant beliefs and practices Cromwell and his friends wanted. The breaking of the power of Rome proceeded little by little, starting in 1531, when the clergy were induced to recognize Henry as the “sole protector and Supreme Head of the Church and clergy of England.” The Act in Restraint of Appeals, drafted by Cromwell, then declared England an independent country in every respect. Meanwhile, having taken Anne to France on a pre-nuptial honeymoon, Henry married her in Westminster Abbey in January 1533.
Henry maintained a strong preference for traditional Catholic practices and, during his reign, Protestant reformers were unable to make many changes to the practices of the Church of England. Indeed, this part of Henry’s reign saw trials for heresy of Protestants, as well as Roman Catholics.

The Reformation under Edward VI’s Reign

When Henry died in 1547, his nine-year-old son, Edward VI, inherited the throne. Under King Edward VI more Protestant-influenced forms of worship were adopted. Under the leadership of the Archbishop of Canterbury, Thomas Cranmer, a more radical reformation proceeded. Cranmer introduced a series of religious reforms that revolutionized the English church from one that—while rejecting papal supremacy—remained essentially Catholic to one that was institutionally Protestant. All images in churches were to be dismantled. Stained glass, shrines, and statues were defaced or destroyed. Crosses were removed, and bells were taken down. Vestments (special uniforms for clergy) were prohibited and either burned or sold. Chalices were melted down or sold. The requirement of the clergy to be celibate was lifted. Clergy could now marry. Sacred processions were banned, and ashes and palms were prohibited. A new pattern of worship was set out in the Book of Common Prayer (1549 and 1552). These were based on the older liturgy (the order of worship) but influenced by Protestant principles. Cranmer’s formulation of the reformed religion effectively abolished the mass, finally removing from the communion service any notion of the real presence of God in the bread and the wine. The publication of Cranmer’s revised prayer book in 1552, supported by a second Act of Uniformity, marked the arrival of the English Church at Protestantism. The prayer book of 1552 remains the foundation of the Church of England’s services.
However, Cranmer was unable to implement all these reforms once it became clear in the spring of 1553 that King Edward, upon whom the whole Reformation in England depended, was dying.

Catholic Restoration

From 1553, under the reign of Henry’s Roman Catholic daughter, Mary I, the Reformation legislation was repealed, and Mary sought to achieve reunion with Rome. Her first Act of Parliament was to retroactively validate Henry’s marriage to her mother and so legitimize her claim to the throne. After 1555, the initial reconciling tone of the regime began to harden. The medieval heresy laws were restored and 283 Protestants were burned at the stake for heresy (thus earning the queen the name “Bloody Mary” among her Protestant subjects). Full restoration of the Catholic faith in England to its pre-Reformation state would take time. Consequently, Protestants secretly ministering to underground congregations were planning for a long haul, a ministry of survival. However, Mary died in November 1558, childless and without having made provision for a Catholic to succeed her, which undid her work to restore the Catholic Church in England.

Elizabeth I

Following Mary’s death, her half-sister Elizabeth inherited the throne. One of the most important concerns during Elizabeth’s early reign was religion. Elizabeth could not be Catholic, as that church considered her illegitimate, being born of Anne Boleyn. At the same time, she had observed the turmoil brought about by Edward’s introduction of radical Protestant reforms. Communion with the Catholic Church was again severed by Elizabeth. Chiefly she supported her father’s idea of reforming the church, but she made some minor adjustments. In this way, Elizabeth and her advisors aimed at a church that included most opinions. Two groups were excluded in Elizabeth’s Church of England. Roman Catholics who remained loyal to the Pope were not tolerated.
They were, in fact, regarded as traitors because the Pope had refused to accept Elizabeth as Queen of England. Roman Catholics were given the hard choice of being loyal either to their church or their country. For some priests it meant life on the run, and in some cases death for treason. The other group not tolerated were people who wanted reform to go much further, and who finally gave up on the Church of England. They could no longer see it as a true church. They believed it had refused to obey the Bible, so they formed small groups of believers outside the church. One of the main groups that formed during this time was the Puritans, whose beliefs were inspired by Calvinism. The government responded with imprisonment and exile to try to crush these “separatists.”

Attributions

Title Image: https://commons.wikimedia.org/wiki/File:HenryAndAnneBoleynPortraits.jpg
Double portrait photo made using File:Hans Holbein, the Younger, Around 1497-1543 - Portrait of Henry VIII of England. Dancingtudorqueen, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/protestantism/
Source: Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE. https://oercommons.org/courseware/lesson/87866/overview (Creative Commons Attribution 4.0)
https://oercommons.org/courseware/lesson/87867/overview
French Wars of Religion

Overview

French Wars of Religion

In France, the Protestant Reformation resulted in a civil war between Protestants and Roman Catholics, known as the French Wars of Religion. These wars occurred within the context of the political and religious conflicts that were part of the Protestant Reformation. These wars generated political instability in France and advances as well as reverses in the acquisition of limited religious liberties by French Protestants known as Huguenots.

Learning Objectives

- Discuss the topic of religious conflict as a result of the Reformation: the wars of religion in France from 1562–98.

Key Terms / Key Concepts

Edict of Nantes: a grant of limited religious liberty to Huguenots in predominantly Catholic France by Henry IV of France on April 13, 1598, within the context of the Protestant Reformation

Huguenots: members of the Protestant Reformed Church of France during the 16th and 17th centuries; inspired by the writings of John Calvin

Real Presence: a term used in various Christian traditions to express belief that in the Eucharist Jesus Christ is really present in what was previously just bread and wine, and not merely present in symbol

St. Bartholomew’s Day Massacre: a series of assassinations targeting Huguenots, along with Catholic mob violence against Huguenots in August 1572, occurring in the context of the French Wars of Religion

War of the Three Henrys (1587-1589): war over which of three candidates would ascend to the French throne, instigated by Spain within the context of the political and religious conflicts of the Protestant Reformation

French Wars of Religion

The French Wars of Religion (1562 – 1598) is the name of a period of civil infighting and military operations, primarily between French Catholics and Protestants (Huguenots).
The conflict involved factional disputes between the aristocratic houses of France, such as the House of Bourbon and the House of Guise, and both sides received assistance from foreign sources. The exact number of wars and their respective dates are the subject of continued debate by historians. Some assert that the Edict of Nantes in 1598 concluded the wars; however, a resurgence of rebellious activity followed this edict, leading some to believe that the Peace of Alais in 1629 is the actual conclusion. The Massacre of Vassy in 1562 is, however, generally agreed to have begun the Wars of Religion; up to a hundred Huguenots were killed in this massacre. During the wars, complex diplomatic negotiations and agreements of peace were followed by renewed conflict and power struggles. Between 2,000,000 and 4,000,000 people were killed as a result of war, famine, and disease. At the conclusion of the conflict in 1598, Huguenots were granted substantial rights and freedoms by the Edict of Nantes, though it did not end hostility towards them. The wars weakened the authority of the monarchy, already fragile under the rule of Francis II and then Charles IX, though the monarchy later reaffirmed its role under Henry IV.

Introduction of Protestantism

Protestant ideas were first introduced to France during the reign of Francis I (1515 – 1547) in the form of Lutheranism—the teachings of Martin Luther—and circulated unimpeded for more than a year around Paris. Although Francis firmly opposed heresy, the difficulty was initially in recognizing what constituted it because Catholic doctrine and definition of orthodox belief was unclear. Francis I tried to steer a middle course as an alternative to the developing religious schism in France. Calvinism, a form of Protestant religion, was introduced by John Calvin, who was born in Noyon, Picardy, in 1509, and fled France in 1536 after the Affair of the Placards.
Calvinism in particular appears to have developed with large support from the nobility. It is believed to have started with Louis Bourbon, Prince of Condé, who, while returning home to France from a military campaign, passed through Geneva, Switzerland, and heard a sermon by a Calvinist preacher. Later, Louis Bourbon would become a major figure among the Huguenots of France. In 1560, Jeanne d’Albret, Queen regnant of Navarre, converted to Calvinism, possibly due to the influence of Theodore de Beze. She was married to Antoine de Bourbon, and their son Henry of Navarre would be a leader among the Huguenots.

Affair of the Placards

Francis I continued his policy of seeking a middle course in the religious rift in France until an incident called the Affair of the Placards. The Affair of the Placards began in 1534 when Protestants started putting up anti-Catholic posters. The posters were extreme in their anti-Catholic content—specifically, the absolute rejection of the Catholic doctrine of “Real Presence.” Protestantism became identified as “a religion of rebels,” helping the Catholic Church to more easily define Protestantism as heresy. In the wake of the posters, the French monarchy took a harder stand against the protesters. Francis I had been severely criticized for his initial tolerance towards Protestants, and after the Affair was encouraged to repress them.

Tensions Mount

King Francis I died on March 31, 1547. He was succeeded to the throne by his son Henry II. Henry II continued the harsh religious policy that his father had followed during the last years of his reign. In 1551, Henry issued the Edict of Châteaubriant, which sharply curtailed Protestant rights to worship, assemble, or even discuss religion at work, in the fields, or over a meal. During the 1550s, an organized influx of Calvinist preachers from Geneva and elsewhere succeeded in setting up hundreds of underground Calvinist congregations in France.
This underground Calvinist preaching (which was also seen in the Netherlands and Scotland) allowed for the formation of covert alliances with members of the nobility and quickly led to more direct action to gain political and religious control. As the Huguenots gained influence and displayed their faith more openly, Roman Catholic hostility grew toward them, even though the French crown offered increasingly liberal political concessions and edicts of toleration. However, these measures disguised the growing tensions between Protestants and Catholics.

The Eight Wars of Religion

These tensions spurred eight civil wars, interrupted by periods of relative calm, between 1562 and 1598. With each break in peace, the Huguenots’ trust in the Catholic throne diminished; the violence became more severe; and Protestant demands became grander. A lasting cessation of open hostility did not occur until 1598. The wars gradually took on a dynastic character, developing into an extended feud between the Houses of Bourbon and Guise, both of which—in addition to holding rival religious views—staked a claim to the French throne. The crown, occupied by the House of Valois, generally supported the Catholic side, but on occasion switched over to the Protestant cause whenever it was politically expedient.

St. Bartholomew's Day Massacre

One of the most infamous events of the Wars of Religion was the St. Bartholomew's Day Massacre of 1572, when Catholics killed thousands of Huguenots in Paris. The massacre began on the night of August 23, 1572 (the eve of the feast of Bartholomew the Apostle) and two days after the attempted assassination of Admiral Gaspard de Coligny—the military and political leader of the Huguenots. The king ordered the killing of a group of Huguenot leaders, including Coligny, and the slaughter spread throughout Paris and beyond.
The exact number of fatalities throughout the country is not known, but estimates range from about 2,000 to 3,000 Protestants killed in Paris and another 3,000 to 7,000 in the French provinces; some contemporary accounts claimed far higher totals. Similar massacres took place in other towns in the weeks following, and outside of Paris the killings continued until October 3. An amnesty granted in 1573 pardoned the perpetrators. The massacre also marked a turning point in the French Wars of Religion. The Huguenot political movement was crippled by the loss of many of its prominent aristocratic leaders, as well as by many re-conversions among the rank and file, and those who remained were increasingly radicalized.

War of the Three Henrys

The War of the Three Henrys (1587–1589) was the eighth and final conflict in the series of civil wars in France known as the Wars of Religion. It was a three-way war fought between:

- King Henry III of France, a son of Henry II who had no children of his own, supported by the royalists and the politiques;
- King Henry of Navarre, leader of the Huguenots, Henry III's cousin, and heir-presumptive to the French throne, supported by Elizabeth I of England and the Protestant princes of Germany;
- and Henry of Lorraine, Duke of Guise, leader of the Catholic League, funded and supported by Philip II of Spain.

The war began when the Catholic League convinced King Henry III to issue an edict outlawing Protestantism and annulling Henry of Navarre's right to the throne. For the first part of the war, the royalists and the Catholic League were uneasy allies against their common enemy, the Huguenots. Henry of Navarre sought foreign aid from the German princes and Elizabeth I of England. Henry III successfully prevented a union of the German and Swiss armies.
The Swiss were his allies and had come to invade France to free him from subjection, but Henry III insisted that their invasion was not in his favor but against him, forcing them to return home. In Paris, the glory of repelling the German and Swiss Protestants fell entirely to Henry, Duke of Guise, and the king's actions were viewed with contempt. People thought that the king had invited the Swiss to invade, paid them for coming, and then sent them back again. The king, who had actually performed the decisive part in the campaign and expected to be honored for it, was astounded that the public voice should thus declare against him.

Open war erupted between the royalists and the Catholic League. After Henry III had Henry of Guise assassinated, Charles, Duke of Mayenne—Guise's younger brother—took over the leadership of the league. Afterwards, it seemed that Henry III could not possibly resist his enemies; his power was effectively limited to Blois, Tours, and the surrounding districts. In these dark times the King of France finally reached out to his cousin and heir, the King of Navarre. Henry III declared that he would no longer allow Protestants to be called heretics, while the Protestants revived the strict principles of royalty and divine right. Whereas ultra-Catholic and anti-royalist doctrines were closely associated on the side of their Roman Catholic enemies, the principles of tolerance and royalism were united on the side of the two kings.

In July 1589, in the royal camp at Saint-Cloud, a Dominican monk named Jacques Clément gained an audience with Henry III and drove a long knife into his spleen. Clément was killed on the spot, taking with him the information of who, if anyone, had hired him. On his deathbed, Henry III called for Henry of Navarre and begged him, in the name of statecraft, to become a Catholic, citing the brutal warfare that would ensue if he refused. He named Henry of Navarre as his heir, and Navarre became Henry IV.
Edict of Nantes

Fighting continued between Henry IV and the Catholic League for almost a decade. The warfare was finally quelled in 1598 when Henry IV, who had recanted Protestantism in favor of Roman Catholicism, issued the Edict of Nantes. The edict established Catholicism as the state religion of France but granted the Protestants equality with Catholics under the throne and a degree of religious and political freedom within their domains. The edict simultaneously protected Catholic interests by discouraging the founding of new Protestant churches in Catholic-controlled regions.

With the proclamation of the Edict of Nantes, and the subsequent protection of Huguenot rights, pressure on the Protestants to leave France abated. In offering general freedom of conscience to individuals, the edict gave many specific concessions to the Protestants, such as amnesty and the reinstatement of their civil rights, including the right to work in any field or for the state and to bring grievances directly to the king. This marked the end of the religious wars that had afflicted France during the second half of the 16th century.

Attributions

Title Image

The St. Bartholomew's Day massacre circa 1572. Attribution: Frans Hogenberg, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Located at: https://commons.wikimedia.org/wiki/File:Frans_Hogenberg,_The_St._Bartholomew%27s_Day_massacre,_circa_1572.jpg. License: CC BY-SA: Attribution-ShareAlike.

Licenses and Attributions

Adapted from:
- https://courses.lumenlearning.com/boundless-worldhistory/chapter/protestantism/
- https://creativecommons.org/licenses/by-sa/4.0/

CC LICENSED CONTENT, SHARED PREVIOUSLY
- Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike

CC LICENSED CONTENT, SPECIFIC ATTRIBUTION
- History of Protestantism. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Protestant Reformation. Provided by: Wikipedia.
License: CC BY-SA: Attribution-ShareAlike
- John Calvin. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Council of Trent. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Indulgence. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Council of Trent Pasquale Cati. Provided by: Wikipedia. License: Public Domain: No Known Copyright
- Ninety-five Theses. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- John_Calvin_by_Holbein.png. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Calvinism#/media/File:John_Calvin_by_Holbein.png. License: CC BY-SA: Attribution-ShareAlike
- Huguenot. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- French Wars of Religion. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Massacre of Vassy. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Edict of Nantes. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Real Presence of Christ in the Eucharist. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
- Reformation and Division, 1530–1558. License: Public Domain: No Known Copyright. License terms: Standard YouTube license
- Francois Dubois 001. Provided by: Wikipedia. License: Public Domain: No Known Copyright
- Witches. Provided by: Wikipedia. License: Public Domain: No Known Copyright
- Matteson Examination of a Witch. Provided by: Wikipedia. License: Public Domain: No Known Copyright
https://oercommons.org/courseware/lesson/87868/overview
English Civil War

Overview

Background to the English Civil War

England in the early modern period was a region of intense political, religious, cultural, and social divisions. One of the most significant difficulties for the country was the newly established Protestantism of Henry VIII, which created many internal divisions. These divisions, coupled with a newly growing group holding political power, brought intensity to the conflicts throughout the period. With the rise of the Stuart dynasty, a new king from Scotland brought new divisions. These deep fissures would eventually crack, causing the English Civil War. In many ways, the English Civil War can be seen as an extension of the Thirty Years' War that ravaged Europe during the 17th century. The result was a complete change in England's political and cultural landscape.

Learning Objectives
- Analyze how the Tudor and the Stuart Dynasties affected the political and economic worlds of England.
- Evaluate the impact of the English Civil War on English culture and society.
- Evaluate how the English Civil War related to the other Protestant Reformation problems of the period.

Key Terms / Key Concepts

Gunpowder Plot: a failed assassination attempt in 1605 against King James I of England and VI of Scotland by a group of provincial English Catholics led by Robert Catesby; a plan to blow up the House of Lords during the State Opening of England's Parliament on November 5, 1605

King James I: Stuart King of England after Queen Elizabeth I, and King of Scotland, during the early seventeenth century

England Following the Tudor Dynasty

When Queen Elizabeth I died without an heir, James VI, her cousin and King of Scots, succeeded her to the throne of England as King James I in 1603. This united Scotland and England under one monarch. He was the first of the Stuart dynasty to rule Scotland and England.
He and his son and successor, Charles I of England, ruled England during a period of escalating conflicts with the English Parliament. One of the key problems that the new king James faced was the growth of a middle class in England that was powerful enough to wield political influence. This middle class, which had grown wealthy from trade and mercantilism, had enough capital to win seats in Parliament. This meant that James's political fortunes were linked to his success in getting the middle-class Protestants to follow his ideas. To make matters more difficult, James was a Protestant yet not a member of the Church of England, which would lead to deeper divisions between him and the Anglican middle class in Parliament.

James I and the English Parliament

James developed his philosophy about the relationship between monarch and parliament in Scotland, and he never reconciled himself to the independent stance of the English Parliament and its unwillingness to bow readily to his policies. It was essential that both the king and Parliament understand their relationship in the same manner, yet this goal fell short under the new king. James I believed that he owed his superior authority to God-given right, while Parliament believed the king ruled by contract (an unwritten one, yet fully binding) and that its own rights were equal to those of the king. This set King James I and Parliament on a collision course, and one of the central divisions was where political power in England truly resided.

On the eve of the state opening of the parliamentary session on November 5, 1605, a soldier called Guy Fawkes was discovered in the cellars of the parliament buildings guarding about twenty barrels of gunpowder, with which he intended to blow up Parliament House the following day.
A Catholic conspiracy led by Robert Catesby, the Gunpowder Plot, as it quickly became known, had in fact been discovered in advance of Fawkes's arrest and deliberately allowed to mature in order to catch the culprits red-handed and the plotters unawares.

By the 1620s, events on the continent had escalated anti-Catholic sentiment in England. A conflict had broken out between the Catholic Holy Roman Empire and the Protestant Bohemians, who had deposed the emperor and elected James's son-in-law, Frederick V, triggering the Thirty Years' War. James reluctantly summoned Parliament to raise the funds necessary to assist his daughter Elizabeth and Frederick, who had been ousted from Prague by Emperor Ferdinand II in 1620. Parliament's House of Commons granted subsidies inadequate to finance serious military operations in aid of Frederick, but called for a war directly against Spain. In response to these measures, James flatly told them not to interfere in matters of royal prerogative and dissolved Parliament. The failed attempt to marry Prince Charles to the Catholic Spanish princess Maria, which both the Parliament and the public strongly opposed, was followed by even stronger anti-Catholic sentiment in the Commons that was finally echoed in court. The outcome of the Parliament of 1624 was ambiguous; James still refused to declare war, but Charles believed the Commons had committed themselves to finance a war against Spain, a position that contributed to his problems with Parliament.

Charles I & Parliament

King James I's reign proved fraught with tension, despite its successes in establishing English colonies in the New World. Time and again, the new king had butted heads with Parliament. But under James's successor and heir, King Charles I, England would plunge into chaos and discontent that culminated in civil war, and the new monarch's head on a chopping block.
Key Terms / Key Concepts

Charles I: Stuart king of England during the first half of the seventeenth century

Thirty Years' War: a series of wars in Central Europe between 1618 and 1648

Long Parliament: an English Parliament that lasted from 1640 until 1660

habeas corpus: a legal action whereby a person can report an unlawful detention or imprisonment before a court, usually through a prison official

Tonnage and Poundage: certain duties and taxes on every tun (cask) of imported wine, and on every pound weight of merchandise exported or imported

Petition of Right: a major English constitutional document that sets out specific liberties and rights of the subjects that the king is prohibited from infringing

eleven years' tyranny: the period from 1629 to 1640, when King Charles I of England, Scotland, and Ireland ruled without accountability to Parliament

Charles I and the English Parliament

In 1625, Charles married the French princess Henrietta Maria. Many members of the lower house of Parliament were opposed to the king's marriage to a Roman Catholic. Although Charles told Parliament that he would not relax religious restrictions, he promised King Louis XIII of France that he would do exactly that when he married Louis's Catholic sister, Henrietta Maria. The marriage treaty placed an English naval force under French control with the purpose of suppressing the Protestant Huguenots at La Rochelle. Charles was crowned in 1626 at Westminster Abbey without his Catholic wife at his side, because she refused to participate in a Protestant religious ceremony.

In January 1629, Charles opened the second session of the English Parliament. Members of the House of Commons began to voice opposition to Charles's policies, and many viewed the imposition of taxes as a breach of the Petition of Right.
When Charles ordered a parliamentary adjournment on March 2, members held the Speaker down in his chair so that the ending of the session could be delayed long enough to pass various resolutions, including anti-Catholic and tax-regulating measures. The provocation was too much for Charles, who dissolved Parliament. Shortly afterward, without the means in the foreseeable future to raise funds from Parliament for a European war, Charles made peace with France and Spain. The following eleven years, during which Charles ruled England without a Parliament, are referred to as the "personal rule" or the eleven years' tyranny.

The Long Parliament, which assembled in the aftermath of the personal rule, started in 1640 and quickly began proceedings to impeach the king's leading counselors for high treason. To prevent the king from dissolving it at will, Parliament passed the Triennial Act, which required Parliament to be summoned at least once every three years and permitted the Lord Keeper of the Great Seal and twelve peers to summon Parliament if the king failed to do so. The tensions between Parliament and Charles began to escalate and would eventually erupt in war.

Charles I and the Power to Tax

Charles I's attempts to impose taxes not authorized by Parliament contributed to the ongoing conflict between the king and Parliament and eventually resulted in the passing of the 1628 Petition of Right.

Charles I of England and the English Parliament

Charles demanded over £700,000 to assist in fighting the European war. The House of Commons refused and instead passed two bills granting him only £112,000. In addition, rather than renewing the customs due from Tonnage and Poundage for the entire life of the monarch, which was traditional, the Commons voted them in for only one year. Because of this, the House of Lords rejected the bill, leaving Charles without any money to provide to the war effort.
After the Commons continued to refuse to provide money, Charles dissolved Parliament. By 1627, with England still at war, Charles decided to raise "forced loans," or taxes not authorized by Parliament. Anyone who refused to pay would be imprisoned without trial, and those who resisted would be sent before the Privy Council. Although the judiciary initially refused to endorse these loans, it succumbed to pressure. While Charles continued to demand the loans, more and more wealthy landowners refused to pay, reducing the income from the loans and necessitating that a new Parliament be called in 1627.

Martial Law

To cope with the ongoing war situation, Charles had introduced martial law to large swathes of the country, and in 1627 extended it to the entire nation. Crucially, martial law as then understood was not a form of substantive law, but a suspension of the rule of law: the replacement of normal statutes with a law based on the whims of the local military commander. However, Charles decided that the only way to prosecute the war was to again ask Parliament for money, and Parliament assembled in 1628. After tense debates, a series of parliamentary declarations known as the Resolutions were prepared. They held that imprisonment was illegal except under law; that habeas corpus should be granted to anyone, whether imprisoned by the king or the Privy Council; that defendants could not be remanded in custody until the crime they were charged with was shown; and that non-parliamentary taxation, such as the forced loans, was illegal. The Resolutions were unanimously accepted by the Commons in April, but they met a mixed reception in the House of Lords, and Charles refused to accept them. Tensions between Parliament and Charles increased throughout 1628 as Parliament debated the Resolutions, which ultimately failed because of the struggle between king and Parliament over who held more political power.
These struggles ultimately led to the Petition of Right, the measure that formed the basis of constitutional monarchy, in which the king's power is checked by Parliament. Charles was not happy about the passing of this bill; he attacked it with increasing zeal, unaware that his unpopularity was escalating dangerously. Having dissolved Parliament in 1627 after it failed to meet his requirements and threatened his political allies, but unable to raise money without it, Charles assembled a new one in 1628. The new Parliament drew up the Petition of Right, and Charles accepted it as a concession in order to obtain his subsidy. The Petition did not, however, grant him the right of tonnage and poundage, which Charles had been collecting without parliamentary authorization since 1625.

Charles I avoided calling a Parliament for the next decade, a period known as the "personal rule" or the "eleven years' tyranny." During this period, Charles's lack of money determined policies. First and foremost, to avoid Parliament, the king needed to avoid war, so Charles made peace with France and Spain, effectively ending England's involvement in the Thirty Years' War.

Charles finally bowed to pressure and summoned another English Parliament in November 1640. Known as the Long Parliament, it proved even more hostile to Charles than its predecessor and passed a law stating that a new Parliament should convene at least once every three years—without the king's summons, if necessary. Other laws passed by the Parliament made it illegal for the king to impose taxes without parliamentary consent and later gave Parliament control over the king's ministers. Finally, the Parliament passed a law forbidding the king to dissolve it without its consent, even if the three years were up. Charles and his supporters continued to resent Parliament's demands, while Parliamentarians continued to suspect Charles of wanting to impose Episcopalianism and unchallenged royal authority by military force.
Within months, the Irish Catholics, fearing a resurgence of Protestant power, struck first, and all of Ireland soon descended into chaos. In early January 1642, accompanied by 400 soldiers, Charles attempted to arrest five members of the House of Commons on a charge of treason but failed to do so. A few days after this failure, fearing for the safety of his family and retinue, Charles left the London area for the north of the country. Further negotiations by frequent correspondence between the king and the Long Parliament proved fruitless. As the summer progressed, cities and towns declared their sympathies for one faction or the other.

The English Civil War and Aftermath

Although the English Civil War began in 1642, it was the second war within the English Civil War that proved the critical turning point in English history. In 1648, the Parliamentarians (Roundheads) claimed victory against the Royalist Cavaliers. Parliament became controlled largely by the Rump Parliament, composed primarily of extremists who supported Parliament over the king. Among the most important, if also unlikely, figures to arise from the chaos was Oliver Cromwell, an extremist himself, renowned for his position as second-in-command of the New Model Army. With the Cavaliers' defeat, and Parliament in the hands of extremists, King Charles I's fate was sealed by the end of 1648. In January 1649, England executed its king as a traitor and established a commonwealth.
Key Terms / Key Concepts

English Civil War: a series of three major military and political wars from 1642 to 1651 between the Royalist "Cavaliers" and the Parliamentary forces, the "Roundheads"

Roundheads: the name given to the supporters of the Parliament of England during the English Civil War

Cavaliers: a name first used by Roundheads as a term of abuse for the wealthier male Royalist supporters of King Charles I and his son Charles II of England during the English Civil War, the Interregnum, and the Restoration

The Trial of Charles I: in January 1649, a poorly constructed trial with little legal foundation used to justify King Charles' execution

Oliver Cromwell: military commander in the New Model Army and extreme supporter of the Parliamentarians who helped establish the Commonwealth of England after the execution of Charles I

New Model Army: an army formed in 1645 by the Parliamentarians in the English Civil War and disbanded in 1660 after the Restoration

Rump Parliament: members of English Parliament in late 1648-1649 who strongly supported the execution of King Charles I on charges of treason; among them was Oliver Cromwell

Commonwealth of England: period in English history (1649-1660) in which England, Scotland, and Ireland were ruled by Oliver Cromwell and his successor

The English Civil War Overview

The English Civil War erupted over Charles' policies in 1642. Very quickly, it developed into a series of armed conflicts and political machinations between Parliamentarians (Roundheads) and Royalists (Cavaliers). The first (1642–1646) and second (1648–1649) wars pitted the supporters of King Charles I against the supporters of the Long Parliament, while the third (1649–1651) saw fighting between supporters of King Charles II (Charles I's son) and supporters of the Rump Parliament. The war ended with the Parliamentarian victory at the Battle of Worcester on September 3, 1651.
The overall outcome of the war was threefold: the trial of Charles I, the exile of Charles II, and the replacement of the English monarchy with, at first, the Commonwealth of England (1649–1653) and then the Protectorate (1653–1659) under Oliver Cromwell's personal rule. The monopoly of the Church of England on Christian worship in England ended, with the victors consolidating established Protestantism in Ireland. Constitutionally, the wars established the precedent that an English monarch cannot govern without Parliament's consent.

The Trial and Execution of Charles I

Charles I was not entirely ignorant of the growing threats the Parliamentarian forces posed to him and his family. Several times he attempted to escape, moving from city to city for safety. In 1645, he sent his son, Charles II, to France, where his mother was waiting for him. For King Charles I, though, escape from England proved far more challenging, and his attempts to flee the country were thwarted.

The second major war within the English Civil War ended in 1648 in a victory for the Parliamentarians, and for Oliver Cromwell's rising star. That victory, and the fact that members of the Rump Parliament were now largely in charge of executive and legislative decisions, led to the decision to put Charles I on trial. On January 1, 1649, the Rump Parliament charged King Charles I with committing acts of tyrannical violence against his own subjects, accusing him of being a tyrant and guilty of treason. The declaration polarized politicians. A special court, the High Court of Justice, was established for the purpose of trying the king. As the proceedings began, though, many members of the court found them too extreme and controversial and resigned; those who remained were loyal to the Parliamentarians. The trial began on January 20, 1649 and lasted six days.
On the sixth day, the members of the court found Charles Stuart guilty of being a "tyrant, traitor, murderer and enemy of the Commonwealth of England." The next day, Charles was led from court to await his execution. On Tuesday, January 30, Charles prayed with Bishop Juxon until 10:00 in the morning. He was then dressed in an extra shirt so that any shivering from the frigid weather would not be mistaken for fear. After three hours of waiting in his chambers, Charles and the bishop walked to Whitehall, where a low-lying chopping block had been assembled. Charles reportedly prayed and said, "I go from a corruptible crown to an incorruptible crown; where no disturbance can be; no disturbance in the world." A few minutes later, the executioner severed Charles' head with a single ax blow before an assembled crowd. After the death of Charles, Oliver Cromwell established the Commonwealth of England and later became its Lord Protector.

Oliver Cromwell's Rise

Oliver Cromwell was relatively obscure for the first forty years of his life. He was an intensely religious man (an Independent Puritan) who entered the English Civil War on the side of the "Roundheads," or Parliamentarians. Nicknamed "Old Ironsides," he was quickly promoted from leading a single cavalry troop to being one of the principal commanders of the New Model Army, playing an important role in the defeat of the royalist forces. Cromwell was one of the signatories of King Charles I's death warrant in 1649, and he dominated the short-lived Commonwealth of England as a member of the Rump Parliament (1649–1653). He was selected to take command of the English campaign in Ireland in 1649–1650. His forces defeated the Confederate and Royalist coalition in Ireland and occupied the country, bringing an end to the Irish Confederate Wars. During this period, a series of laws were passed against Roman Catholics (a significant minority in England and Scotland but the vast majority in Ireland), and a substantial amount of their land was confiscated.
Cromwell also led a campaign against the Scottish army between 1650 and 1651. In April 1653, he dismissed the Rump Parliament by force, setting up a short-lived nominated assembly known as Barebone's Parliament, before being invited by his fellow leaders to rule as Lord Protector of England (which included Wales at the time), Scotland, and Ireland from December 1653. As a ruler, he executed an aggressive and effective foreign policy. He died of natural causes in 1658. After the Royalists returned to power in 1660, they had his corpse dug up, hung in chains, and beheaded. Cromwell is one of the most controversial figures in the history of the British Isles, considered a regicidal or military dictator by some and a hero of liberty by others. His measures against Catholics in Scotland and Ireland have been characterized as genocidal or near-genocidal, and in Ireland his record is harshly criticized.

The English Protectorate

Despite the revolutionary nature of the government during the Protectorate, Cromwell's regime was marked by an aggressive foreign policy, no drastic reforms at home, and difficult relations with Parliament, which in the end made it increasingly similar to a monarchy.

The Commonwealth of England

The Commonwealth of England was the period when England, later along with Ireland and Scotland, was ruled as a republic following the end of the Second English Civil War and the trial and execution of Charles I (1649). The republic's existence was declared by the Rump Parliament on May 19, 1649. Power in the early Commonwealth was vested primarily in the Parliament and a Council of State. During this period, fighting continued, particularly in Ireland and Scotland, between the parliamentary forces and those opposed to them, as part of what is now referred to as the Third English Civil War.
In 1653, after the forcible dissolution of the Rump Parliament, Oliver Cromwell was declared Lord Protector of a united Commonwealth of England, Scotland, and Ireland, inaugurating the period now usually known as the Protectorate. The term "Commonwealth" is sometimes used for the whole of 1649 to 1660, although other historians limit the use of the term to the years prior to Cromwell's formal assumption of power in 1653.

The Protectorate

The Protectorate was the period during the Commonwealth when England (which at that time included Wales), Ireland, and Scotland were governed by a Lord Protector. The Protectorate began in 1653 when, following the dissolution of the Rump Parliament and then Barebone's Parliament, Oliver Cromwell was appointed Lord Protector of the Commonwealth under the terms of the Instrument of Government.

Cromwell had two key objectives as Lord Protector. The first was "healing and settling" the nation after the chaos of the civil wars and the regicide. Despite the revolutionary nature of the government, its social priorities did not include any meaningful attempt to reform the social order. He was also careful in the way he approached overseas colonies. England's American colonies in this period consisted of the New England Confederation, the Providence Plantation, the Virginia Colony, and the Maryland Colony. Cromwell soon secured the submission of these, but largely left them to their own affairs. His second objective was spiritual and moral reform. As a very religious man (an Independent Puritan), he aimed to restore liberty of conscience and promote both outward and inward godliness throughout England. The latter translated into rigid religious laws (e.g., compulsory church attendance).

The first Protectorate parliament met in September 1654 and, after some initial gestures approving appointments previously made by Cromwell, began to work on a moderate program of constitutional reform.
Rather than opposing Parliament's bill, Cromwell dissolved Parliament in January 1655. After a royalist uprising led by Sir John Penruddock, Cromwell divided England into military districts ruled by Army Major-Generals who answered only to him. The fifteen major generals and deputy major generals—called "godly governors"—were central not only to national security but also to Cromwell's moral crusade. However, the major-generals lasted less than a year. Cromwell's failure to support his men, sacrificing them to his opponents, caused their demise. Their activities between November 1655 and September 1656 had, nonetheless, reopened the wounds of the 1640s and deepened antipathies to the regime.

During this period Cromwell also faced challenges in foreign policy. The First Anglo-Dutch War against the Dutch Republic, which had broken out in 1652, was eventually won in 1654. Having negotiated peace with the Dutch, Cromwell proceeded to engage the Spanish in warfare. This involved secret preparations for an attack on the Spanish colonies in the Caribbean and resulted in the invasion of Jamaica, which then became an English colony. The Lord Protector also became aware of the contribution the Jewish community made to the economic success of Holland, then England's leading commercial rival. This led to his encouraging Jews to return to England, 350 years after their banishment by Edward I, in the hope that they would help speed up the recovery of the country after the disruption of the English Civil War.

In 1657, Oliver Cromwell rejected the offer of the Crown presented to him by Parliament and was ceremonially re-installed as Lord Protector, this time with greater powers than had previously been granted him under this title. Most notably, however, the office of Lord Protector was still not to become hereditary, though Cromwell was now able to nominate his own successor.
Cromwell’s new rights and powers were laid out in the Humble Petition and Advice, a legislative instrument that replaced the Instrument of Government. Despite failing to restore the Crown, this new constitution did set up many of the vestiges of the ancient constitution, including a house of life peers (in place of the House of Lords). In the Humble Petition it was called the “Other House,” as the Commons could not agree on a suitable name. Furthermore, Oliver Cromwell increasingly took on more of the trappings of monarchy.

Cromwell's Death and Legacy

Cromwell died of natural causes in 1658, and his son Richard succeeded him as Lord Protector. Richard sought to expand the basis for the Protectorate beyond the army to civilians, and he summoned a Parliament in 1659. However, the republicans assessed his father’s rule as “a period of tyranny and economic depression” and attacked the increasingly monarchy-like character of the Protectorate. Richard proved unable to manage the Parliament or to control the army. In May 1659 a Committee of Safety was formed on the authority of the Rump Parliament, displacing the Protector’s Council of State; it was in turn replaced by a new Council of State. A year later the monarchy was restored.

In 1661, Oliver Cromwell's body was exhumed. Royalists hanged the body in chains at Tyburn, London, before throwing it into a pit and severing the head. Cromwell's head was then displayed on a spike outside Westminster Hall until 1685 and was later sold to various owners until the mid-twentieth century.

Cromwell is one of the most controversial figures in the history of the British Isles, considered a regicidal or military dictator by some and a hero of liberty by others. His measures against Catholics in Scotland and Ireland have been characterized as genocidal or near-genocidal, and in Ireland his record is harshly criticized. Following the Irish Rebellion of 1641, most of Ireland came under the control of the Irish Catholic Confederation.
In early 1649, the Confederates allied with the English Royalists, who had been defeated by the Parliamentarians in the English Civil War. By May 1652, Cromwell’s Parliamentarian army had defeated the Confederate and Royalist coalition in Ireland and occupied the country—bringing an end to the Irish Confederate Wars (or Eleven Years’ War)—although guerrilla warfare continued for another year. Cromwell passed a series of Penal Laws against Roman Catholics (the vast majority of the population) and confiscated large amounts of their land. The extent to which Cromwell, who was in direct command for the first year of the campaign, was responsible for brutal atrocities in Ireland is debated to this day.

Restoration of the Stuarts

Over a decade after Charles I’s execution in 1649 and Charles II’s escape to mainland Europe in 1651, the Stuarts were restored to the English throne by Royalists in the aftermath of the slow fall of the Protectorate. Those who had remained loyal to King Charles I found a new champion in his son, King Charles II.

Attributions

Images from Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/d/dd/The_Execution_of_Charles_I_of_England.jpg

Text modified from Boundless: https://www.coursehero.com/study-guides/boundless-worldhistory/protestantism/

Historic Royal Palaces. "The Execution of Charles I." https://www.hrp.org.uk/banqueting-house/history-and-stories/the-execution-of-charles-i/

British Civil Wars, Commonwealth, and Protectorate Project: 1638-1660. "The Trial of King Charles I." http://bcw-project.org/church-and-state/the-commonwealth/trial-of-king-charles-i
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87868/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87869/overview
The Counter Reformation

Overview

The Counter Reformation

The Protestant Reformation prompted the Roman Catholic Church to reform itself from the top down.

Learning Objectives

- Discuss the religious and political developments associated with the Counter-Reformation.

Key Terms / Key Concepts

Council of Trent: council of the Roman Catholic Church set up in Trento, Italy, in direct response to the Reformation

Ignatius Loyola (1491 – 1556): the founder of the Society of Jesus (Jesuits)

The Counter-Reformation

The Protestant Reformation resulted in large areas of Europe defecting from the Roman Catholic Church. Many Christians in Italy, Spain, and France, however, remained loyal to the Catholic church. Some church reformers wished to end corrupt practices in the Roman Catholic Church while still supporting the institution. For example, Erasmus of Rotterdam (1466 – 1536)—the Dutch Humanist—and the English Humanist Thomas More (1478 – 1535) were both vocal critics of corruption and abuses within the Roman Catholic Church, but both refused to abandon it. In fact, the English king Henry VIII executed Thomas More on a charge of treason when More refused to reject the Pope as the head of the Church.

Reform eventually did take place within the Roman Catholic Church in the 16th century as a response to the defection of so many Christians to the Protestant churches. Historians refer to these reforms as the Counter-Reformation or the Catholic Reformation. Since the Roman Catholic Church possessed a strict hierarchical structure, sweeping reform within the church could only be implemented from the very top of the hierarchy by the Pope himself.

Pope Paul III (Alessandro Farnese, r. 1534 – 1549) was an unlikely reformer. He belonged to a powerful, aristocratic family in Central Italy and had become a Cardinal in the church through nepotism, since his older sister, Giulia, was the mistress of Pope Alexander VI.
Pope Paul III summoned the Council of Trent in 1545 to address the issue of church reform. Earlier Popes had resisted summoning such a council for fear that a church council could limit the authority of the Papacy within the church. The Council of Trent met intermittently over eighteen years, concluding in 1563. The members of the council debated whether to adopt the reforms proposed by Martin Luther or to affirm support for existing church practices and doctrines. In general, church leaders from northern Europe at this council tended to support Luther's ideas, whereas church leaders from Spain and Italy were more conservative and wished to see no changes in practices and doctrines. In the end the conservatives were victorious in this debate. The focus of reform in the Counter-Reformation was therefore on ending abuse and corruption within the church rather than on adopting new doctrines or practices. For example, the Council of Trent affirmed the belief in Purgatory and in papal indulgences but condemned the selling of indulgences as a fund-raising scheme. The Council also sought to purge the church of uneducated, corrupt priests by requiring priests to receive more education and training.

The Counter-Reformation also witnessed a grassroots revival among Roman Catholics and renewed a sense of mission. The Society of Jesus, or Jesuits, was front and center in this revival. Ignatius Loyola (1491 – 1556) was the founder of this new religious order within the Roman Catholic Church. In his youth Loyola was a soldier from an aristocratic Spanish family. After a brush with death due to an injury in battle, Loyola decided to devote his life to Christ, spending up to seven hours a day in prayer. He wandered through Spain, France, and Italy, preaching and serving others. Eventually, in 1540, Pope Paul III allowed Ignatius and his followers to organize the Society of Jesus as a new order of priests. With his military background, Loyola organized the Jesuits along military lines.
Loyola was the first Superior General of the order. The Jesuits committed themselves to preaching the doctrines of the Roman Catholic Church to Protestants in Europe and to heathens (non-Christians) in Asia and the Western Hemisphere. The Jesuits also established schools and colleges across Europe whose curriculum was shaped by Renaissance Humanism. Protestants, however, came to view Jesuits as tools of the Devil.

Spain and the Counter-Reformation

By the second half of the 16th century, Spain dominated the world stage with its vast empire in Europe and the Western Hemisphere during the reigns of Charles V (1516 – 1555) and his son Philip II (1555 – 1598). Philip's territories included the former empires of the Aztecs and Incas in Mexico and South America, as well as the Netherlands and lands in Italy. In 1580 he also inherited the throne of Portugal and its overseas empire. The massive influx of gold and silver from the royal mines in the New World provided Philip with the financial resources to wage wars across Europe in order to advance his interests and those of Spain, which in his eyes were identical. Like his father, Charles V, Philip sought to expand the territories of the Hapsburg family and to defend Christendom and the Roman Catholic Church from the Muslim Ottoman Empire and Protestant “heretics.”

Even though Philip's policies were rooted in a Medieval mindset, he constructed a modern, centralized bureaucracy to govern his extensive empire. Philip used the precious metals of the New World to build a large bureaucratic structure, whose officials he appointed and oversaw. Philip reportedly spent his days and nights poring over reports from his officials, writing and dispatching letters to these officials, and conferring with his advisers. His father, Charles V, had allowed local aristocrats in various territories to administer their respective regions, but Philip appointed his own trusted men and family members to govern these same territories.
Many of these appointed officials were Spaniards from Castile in central Spain, where Philip II grew up. The Hapsburg Empire was therefore much more centralized under Philip than under his father. Also, unlike his father, Philip II preferred to remain in Spain rather than travel frequently across his territories. He built his primary residence, the Escorial, which was both a royal palace and a monastery, just outside Madrid, which under Philip II became the capital of the Spanish Empire. During Philip's reign Madrid went from being a village to being one of the largest cities in Europe, with a population of 100,000. The city's population expanded rapidly as royal bureaucrats with their servants and staffs moved there to be in close proximity to the king.

In the administration of his kingdom, Philip's policies foreshadowed the Absolutism of the following century. In an Absolute Monarchy, political power is concentrated in the hands of the monarch, whose authority is in theory unlimited since there are no legal or institutional structures to keep this authority in check. In Spain prior to Philip's reign, the power of the king was held in check by the regional Cortes. Each region of Spain had its own Cortes, an elected body that represented the landed aristocracy and the wealthy commoners of the towns. The monarch could not raise taxes without the approval of the Cortes, and kings traditionally issued laws with the Cortes' approval in order to obtain new taxes. Philip, however, did not need the Cortes to secure funding for his government, since he could draw revenue from the silver and gold mines of the New World. Consequently, he could simply rule by decree, since winning the approval of the Cortes was no longer necessary. Traditionally the landed aristocracy had exercised political power through participation in the Cortes, but under Philip the landed aristocracy instead sought positions in the royal bureaucracy as a way to wield political power and influence.
In addition, royal bureaucrats were exempt from paying taxes. These aristocrats therefore had a vested interest in supporting the unlimited authority of the king and his bureaucracy.

Philip II was a controversial figure in his own day. Protestants across Europe portrayed the king as a dark and evil person. Philip II reportedly laughed out loud only once in his life: when he heard the news that 30,000 Protestant men, women, and children had been slaughtered by Roman Catholics in France in the Saint Bartholomew's Day Massacre. The walls of the Escorial were adorned with paintings of the sufferings of Christian martyrs, since Philip supposedly took secret pleasure in these depictions of cruelty and torture. Philip reportedly murdered his own son, Charles. Protestants also propagated the “Black Legend,” which portrayed the Spanish Empire under Philip as a force for evil tyranny and oppression, one that treated its many victims cruelly. During this period, many Protestants assumed that the Pope was an actual agent of Satan.

The people of Spain, who were predominantly Roman Catholic, had a very different perspective. Philip II remained a very popular king in Spain due to his devotion to the Roman Catholic Church. The paintings of martyrs in the Escorial reportedly served only to inspire Philip to serve God in the face of adversity. His son, Charles, died tragically of natural causes. And in the eyes of Spaniards, Philip was a father figure who worked tirelessly for the good of the Spanish people and the Roman Catholic Church.

Attributions

Title Image

https://commons.wikimedia.org/wiki/File:Ignatius_Loyola.jpg

Ignatius Loyola, 16th century - Anonymous, Unknown author, Public Domain, via Wikimedia Commons

Adapted From: https://courses.lumenlearning.com/boundless-worldhistory/chapter/protestantism/
Catholic Spain

Overview

Catholic Spain

Spain under the reign of the Hapsburg monarch Philip II was the champion of the Catholic Reformation, or Counter-Reformation.

Learning Objectives

- Discuss the religious and political developments in Spain under Philip II.

Key Terms / Key Concepts

the Catholic Monarchs: the joint title used in history for Queen Isabella I of Castile and King Ferdinand II of Aragon (They were both from the House of Trastámara and were second cousins descended from John I of Castile; after marriage they were given a papal dispensation by Sixtus IV to deal with consanguinity. They established the Spanish Inquisition around 1480.)

Alhambra Decree: an edict issued on March 31, 1492, by the joint Catholic Monarchs of Spain (Isabella I of Castile and Ferdinand II of Aragon) ordering the expulsion of practicing Jews from the Kingdoms of Castile and Aragon, along with their territories and possessions, by July 31 of that year

Consanguinity: the property of being from the same kinship as another person; the quality of being descended from the same ancestor as another person (The laws of many jurisdictions set out degrees of consanguinity in relation to prohibited sexual relations and marriage parties.)

Spanish Armada: a Spanish fleet of 130 ships that sailed from A Coruña in August 1588 with the purpose of escorting an army from Flanders to invade England, with the strategic aim of overthrowing Queen Elizabeth I of England and the Tudor establishment of Protestantism in England

Catholic League: a major participant in the French Wars of Religion, formed by Henry I, Duke of Guise, in 1576 (It aimed to eradicate Protestants—also known as Calvinists or Huguenots—from Catholic France during the Protestant Reformation, as well as to replace King Henry III. Pope Sixtus V, Philip II of Spain, and the Jesuits were all supporters of this Catholic party.)
Eighty Years’ War: a revolt of the Seventeen Provinces against the political and religious hegemony of Philip II of Spain, the sovereign of the Habsburg Netherlands, between 1568 and 1648; also known as the Dutch War of Independence

Morisco: a term used to refer to former Muslims who converted to Christianity, or were coerced into converting, after Spain outlawed the open practice of Islam by its Mudejar population in the early 16th century (The group was subject to systematic expulsions from Spain’s various kingdoms between 1609 and 1614, the most severe of which occurred in the eastern Kingdom of Valencia.)

Catholic Spain

Around 1480, decades before the start of the Protestant Reformation, Ferdinand II of Aragon and Isabella I of Castile, known as the Catholic Monarchs, established what would become known as the Spanish Inquisition. It was intended to maintain Catholic orthodoxy in their kingdoms and to replace the Medieval Inquisition, which was under Papal control. It covered Spain and all the Spanish colonies and territories, which would eventually include the Canary Islands, the Spanish Netherlands, the Kingdom of Naples, and all Spanish possessions in North, Central, and South America. People who converted to Catholicism were not subject to expulsion, but between 1480 and 1492 hundreds of those who had converted (conversos and moriscos) were accused of secretly practicing their original religion (crypto-Judaism or crypto-Islam) and arrested, imprisoned, interrogated under torture, and in some cases burned to death, in both Castile and Aragon. In 1492 Ferdinand and Isabella ordered the segregation of communities to create closed quarters that became what were later called “ghettos.” They also furthered economic pressures upon Jews and other non-Christians by increasing taxes and social restrictions.
In 1492 the monarchs issued a decree of expulsion of Jews, known formally as the Alhambra Decree, which gave Jews in Spain four months either to convert to Catholicism or to leave Spain. Tens of thousands of Jews emigrated to other lands such as Portugal, North Africa, the Low Countries, Italy, and the Ottoman Empire. Later in 1492, Ferdinand issued a letter addressed to the Jews who had left Castile and Aragon, inviting them back to Spain if they would become Christians. The Inquisition was not definitively abolished until 1834, during the reign of Isabella II, after a period of declining influence in the preceding century.

Most of the descendants of the Muslims who submitted to Christian conversion, rather than exile, during the early periods of the Spanish and Portuguese Inquisition were known as the Moriscos; they were later expelled from Spain after serious social upheaval, when the Inquisition was at its height. The expulsions were carried out more severely in eastern Spain (Valencia and Aragon) due to local animosity toward Muslims and Moriscos, who were perceived as economic rivals; local workers saw them as cheap labor undermining their bargaining position with the landlords. Those whom the Spanish Inquisition found to be secretly practicing Islam or Judaism were executed, imprisoned, or expelled. Nevertheless, all those deemed to be “New Christians” were perpetually suspected of various crimes against the Spanish state, including continued practice of Islam or Judaism.

The Hapsburgs

Over the 16th and 17th centuries Spain was ruled by the major branch of the Habsburg dynasty. In this period, “Spain” or “the Spains” covered the entire peninsula and was politically a confederacy comprising several nominally independent kingdoms in personal union: Aragon, Castile, León, Navarre, and, from 1580, Portugal. The term “Monarchia Catholica” (Catholic Monarchy) remained in use for the monarchy under the Spanish Habsburgs.
However, Spain as a unified state technically came into being only after the death of Charles II in 1700, the last ruler of Spain of the Habsburg dynasty. Under the Habsburgs, Spain dominated Europe politically and militarily but experienced a gradual decline of influence in the second half of the 17th century under the later Habsburg kings. The Habsburg years were also a Spanish Golden Age of cultural efflorescence.

The Global Power

When Spain’s first Habsburg ruler, Charles I, became king of Spain in 1516, the country became central to the dynastic struggles of Europe. After becoming king of Spain, Charles also became Holy Roman Emperor Charles V, and because of his widely scattered domains he was not often in Spain. In 1519 Charles, at the age of 19, inherited an immense realm in Europe from his ancestors. Charles' paternal grandfather, Maximilian I, was the Archduke of Austria and Holy Roman Emperor. Charles' paternal grandmother was Mary of Burgundy, the heiress of the Duchy of Flanders (the Netherlands and Belgium today). Maximilian had married Mary in 1477 to prevent her archenemy, Louis XI of the Valois Dynasty of France, from seizing Flanders for himself.

In 1496 the son of Maximilian and Mary, Philip the Handsome, married Joanna, a daughter of Ferdinand of Aragon and Isabella of Castile. The marriage of Ferdinand and Isabella in 1469 had united their two kingdoms into the kingdom of Spain. The marriage of their daughter Joanna to Philip sealed an alliance against a common enemy, the Valois Dynasty of France. Ferdinand was also the king of the island of Sicily, and his family, the House of Aragon, also claimed the kingdom of Naples in Southern Italy. However, the Valois king of France, Charles VIII—son of Louis XI—had claimed Naples for himself and invaded Italy in 1494 to make good this claim. The union of Philip and Joanna resulted in the birth of the future Charles V in 1500.
In 1506, after the death of Philip, the six-year-old Charles inherited Flanders from his father. Charles grew up in Flanders, and French was his native language. Since Ferdinand had no surviving sons and Charles was the son of Ferdinand and Isabella's daughter Joanna, Charles became his heir apparent. In 1516, after the death of his maternal grandfather, Ferdinand, Charles inherited, at the age of 16, the thrones of Aragon and Castile, as well as the kingdom of Sicily. In 1519 Maximilian died, and Charles inherited the Duchy of Austria from his paternal grandfather. The Electors of the Holy Roman Empire then elected Charles to succeed his grandfather as Holy Roman Emperor Charles V.

As he approached the end of his life, Charles made provision for the division of the Habsburg inheritance into two parts. On one side was Spain with its possessions in Europe, North Africa, the Americas, and the Netherlands; on the other, Austria and the Holy Roman Empire. This division was to create enormous difficulties for his son, Philip II of Spain.

During Charles’s reign, Spanish settlements were established in the New World. The Aztec and Inca Empires were conquered between 1519 and 1521 and between 1532 and 1572, respectively. Mexico City, which became the most important colonial city, was established in 1524 and became the primary center of administration in the New World. Buenos Aires was established in 1536, New Granada (modern Colombia) was colonized in the 1530s, and Florida was colonized in the 1560s—shortly after Charles’s death.

The Hapsburg family also came to inherit the throne of Portugal. In 1526 Charles married Isabella, the daughter of Manuel I of Portugal, and in the same year Charles' sister Catherine married John III of Portugal, the son of Manuel I. In 1578 the grandson of John III, Sebastian of Portugal, died in battle in his war against the Sultan of Morocco.
Since Sebastian died young without any children, the throne soon passed to his nearest male relative, Philip II of Spain, the son of Charles and Isabella. Philip II thus inherited the kingdom of Portugal in 1580, along with its far-flung overseas empire, which included territories in India, East and West Africa, and Brazil in South America.

The Spanish Empire abroad became the source of Spanish wealth and power in Europe. But as precious metal shipments rapidly expanded late in the century, they contributed to the general inflation that was affecting the whole of Europe. Instead of fueling the Spanish economy, American silver made the country increasingly dependent on foreign sources of raw materials and manufactured goods.

Philip II became king on Charles I’s abdication in 1555. During Philip II’s reign there were several separate state bankruptcies, which were partly the cause of the declaration of independence that created the Dutch Republic. A devout Catholic, Philip organized a huge naval expedition against Protestant England in 1588, usually known as the Spanish Armada, which was unsuccessful, mostly due to storms and grave logistical problems. Despite these problems, the growing inflow of New World silver from the mid-16th century, the justified military reputation of the Spanish infantry, and even the quick recovery of the navy from its Armada disaster made Spain the leading European power—a novel situation of which its citizens were only just becoming aware.

Philip II and the Spanish Armada

Extreme commitment to championing Catholicism against both Protestantism and Islam shaped both the domestic and foreign policies of Philip II, who was the most powerful European monarch in an era of religious conflict. Philip saw himself as a champion of Catholicism and faced challenges within his realm in his quest to defend it.
The Spanish Empire was not a single monarchy with one legal system but a federation of separate realms, each jealously guarding its own rights against those of the House of Habsburg. In practice, Philip often found his authority overruled by local assemblies and his word less effective than that of local lords. He also grappled with the problem of the large Morisco population in Spain, who had been forcibly converted to Christianity by his predecessors. In 1569, the Morisco Revolt broke out in the southern province of Granada in defiance of attempts to suppress Moorish customs, and Philip ordered the expulsion of the Moriscos from Granada and their dispersal to other provinces.

Despite its immense dominions, Spain was a country with a sparse population that yielded a limited income to the crown (in contrast to France, for example, which was much more heavily populated). Philip faced major difficulties in raising taxes, the collection of which was largely farmed out to local lords. He was able to finance his military campaigns only by taxing and exploiting the local resources of his empire. The flow of income from the New World proved vital to his militant foreign policy; nonetheless, his exchequer faced bankruptcy several times. During Philip’s reign there were five separate state bankruptcies.

Philip’s foreign policies were determined by a combination of Catholic fervor and dynastic objectives. He considered himself the chief defender of Catholic Europe, both against the Ottoman Turks and against the forces of the Protestant Reformation. Philip achieved a decisive victory against the Turks at the Battle of Lepanto in 1571, with the allied fleet of the Holy League (Genoa, Venice, and the Papal States), which he had put under the command of his illegitimate brother, John of Austria. This victory ended Ottoman domination of the Mediterranean Sea.

He never relented from his fight against Protestantism, which he saw as heresy, defending the Catholic faith and limiting freedom of worship within his territories. These territories included his patrimony in the Netherlands, where Protestantism had taken deep root. Following the Revolt of the Netherlands in 1568, Philip waged a campaign against Dutch secession. His plans to consolidate control of the Netherlands provoked unrest, which gradually brought the Calvinists to the leadership of the revolt and led to the Eighty Years’ War. This conflict consumed much Spanish expenditure during the later 16th century.

Philip’s commitment to restoring Catholicism in the Protestant regions of Europe also resulted in the Anglo-Spanish War (1585 – 1604). This was an intermittent conflict between the kingdoms of Spain and England that was never formally declared and was punctuated by widely separated battles. In 1588, the English defeated Philip’s Spanish Armada, thwarting his planned invasion of the country to reinstate Catholicism. But the war continued for the next sixteen years, in a complex series of struggles that included France, Ireland, and the Netherlands, which was the main battle zone. Two further Spanish armadas were sent in 1596 and 1597, but they were frustrated in their objectives, mainly because of adverse weather and poor planning. The war would not end until all the leading protagonists, including Philip, had died.

Philip financed the Catholic League during the French Wars of Religion (fought primarily between French Catholics and French Protestants, known as Huguenots), and he directly intervened in the final phases of the wars (1589 – 1598). His interventions in the fighting, such as sending the Duke of Parma to end Henry IV’s siege of Paris in 1590 and of Rouen in 1592, contributed to saving the French Catholic League’s cause against a Protestant monarchy. In 1593, Henry agreed to convert to Catholicism.
Weary of war, most French Catholics switched to his side against the hardline core of the Catholic League, who were portrayed by Henry’s propagandists as puppets of the foreign Philip. By the end of 1594 certain League members were still working against Henry across the country, but all relied on the support of Spain. In 1595, therefore, Henry officially declared war on Spain, to show Catholics that Philip was using religion as a cover for an attack on the French state, and to show Protestants that he had not become a puppet of Spain through his conversion, while also hoping to take the war to Spain and make territorial gains. The war was only drawn to an official close with the Peace of Vervins in May 1598, when Spanish forces and subsidies were withdrawn. Meanwhile, Henry issued the Edict of Nantes, which offered a high degree of religious toleration for French Protestants. The military interventions in France thus ended in an ironic fashion for Philip: they had failed to oust Henry from the throne or to suppress Protestantism in France, and yet they had played a decisive part in helping the French Catholics secure the conversion of Henry, ensuring that Catholicism would remain France’s official and majority faith—which was of paramount importance for the devoutly Catholic Spanish king.

The Gradual Decline

In the century following Philip II’s reign, economic and administrative problems multiplied in Castile and Spain, revealing the weakness of the native economy. Rising inflation, financially draining wars in Europe, the ongoing aftermath of the expulsion of the Jews and Moors from Spain, and Spain’s growing dependency on gold and silver imports combined to cause several bankruptcies that produced an economic crisis in the country, especially in heavily burdened Castile.
Faced with wars against England, France, and the Netherlands, the Spanish government found that neither the New World silver nor steadily increasing taxes were enough to cover its expenses, and it went bankrupt again in 1596. Furthermore, the great plague of 1596 – 1602 killed 600,000 to 700,000 people, or about 10% of the population; altogether, more than 1,250,000 deaths resulted from the extreme incidence of plague in 17th-century Spain. Economically, the plague destroyed the labor force and dealt a psychological blow to an already troubled Spain.

Philip II died in 1598 and was succeeded by his son Philip III (reigned 1598 – 1621). During his reign a twelve-year truce with the Dutch was overshadowed by Spain’s involvement in the European-wide Thirty Years’ War in 1618. Philip III had no interest in politics or government, preferring to engage in lavish court festivities, religious indulgences, and the theater. His government resorted to a tactic that had been resolutely resisted by Philip II: paying for budget deficits by the mass minting of increasingly worthless vellones (the currency), which caused inflation. In 1607, the government faced another bankruptcy.

Philip III was succeeded in 1621 by his son Philip IV of Spain (reigned 1621 – 1665). During Philip IV’s reign much of the policy was conducted by the minister Gaspar de Guzmán, Count-Duke of Olivares. In 1640, with the war in Central Europe having no clear winner except the French, both Portugal and Catalonia rebelled. Portugal was lost to the crown for good; in Italy and most of Catalonia, French forces were expelled, and Catalonia’s independence was suppressed.

Charles II (1665 – 1700), the last of the Habsburgs in Spain, was three years old when his father, Philip IV, died in 1665. The Council of Castile appointed Philip’s second wife and Charles’s mother, Mariana of Austria, regent for the minor king.
As regent, Mariana managed the country’s affairs through a series of favorites (“validos”), whose merits usually amounted to no more than meeting her fancy. Spain was essentially left leaderless and was gradually reduced to a second-rank power. Inbreeding The Spanish branch of the Habsburg royal family was noted for extreme consanguinity. Well aware that they owed their power to fortunate marriages, they married among themselves to protect their gains. Charles’s father and his mother, Mariana, were actually uncle and niece. Charles was physically and mentally disabled and infertile, possibly in consequence of this routine inbreeding. Due to the deaths of his half-brothers, he was the last member of the male Spanish Habsburg line. He did not learn to speak until the age of four, nor to walk until the age of eight. He was treated as virtually an infant until he was ten years old. His jaw was so badly deformed (an extreme example of the so-called Habsburg jaw) that he could barely speak or chew. Fearing the frail child would be overtaxed, his caretakers did not force Charles to attend school. The Habsburg dynasty became extinct in Spain with Charles II’s death in 1700, at which time the War of the Spanish Succession ensued; during this time, the other European powers tried to assume control over the Spanish monarchy. As a result, the rule of Spain passed to the Bourbon dynasty. Attributions Title Image: Philip II of Spain - workshop of Titian, circa 1550, Public domain, via Wikimedia Commons Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/spain-and-catholicism/
The Thirty Years' War Overview The Thirty Years’ War, the Holy Roman Empire, and the Protestant Reformation The Thirty Years' War was a succession of four wars between two opposing groups of various Protestant and Catholic states in the fragmented Holy Roman Empire from 1618 to 1648. The outcome of this conflict confirmed the Protestant Reformation in Europe and contributed to the progress of personal sovereignty in religious matters specifically and other kinds of individual choices in general. Learning Objectives Discuss the topic of religious conflict as a result of the Reformation: the causes and outcome of the Thirty Years War. Key Terms / Key Concepts Peace of Augsburg: a treaty between Charles V and the forces of Lutheran princes on September 25, 1555, which officially ended the religious struggle between the two groups and allowed princes in the Holy Roman Empire to choose which religion would reign in their principality Ferdinand II: His reign as Holy Roman Emperor coincided with the Thirty Years’ War and his aim, as a zealous Catholic, was to restore Catholicism as the only religion in the empire and suppress Protestantism. Bohemian Revolt: an uprising of the Bohemian estates against the rule of the Habsburg dynasty defenestration: the act of throwing someone out of a window Thirty Years' War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation Overview The Thirty Years' War was a succession of wars in central Europe from 1618 to 1648. It was the longest and most destructive conflict in European history up to that time, resulting in millions of casualties. Initially a war between various Protestant and Catholic states in the fragmented Holy Roman Empire, it gradually developed into a more general conflict involving most of the great powers. 
These states employed relatively large mercenary armies, and the war became less about religion and more of a continuation of the Bourbon-Habsburg rivalry for European political pre-eminence. In the 17th century, religious beliefs and practices were a much larger influence on the average European than they are today. In that era, almost everyone took one side or the other. The war began when the newly elected Holy Roman Emperor, Ferdinand II, tried to impose religious uniformity on his domains, forcing Roman Catholicism on the Protestant states. The northern Protestant states, angered by the violation of their rights granted in the Peace of Augsburg, banded together to form the Protestant Union. Ferdinand II was a devout Roman Catholic and relatively intolerant in contrast with his predecessor, Rudolf II. His policies were considered stridently pro-Catholic. The Holy Roman Empire The Holy Roman Empire was a fragmented collection of largely independent states. The position of the Holy Roman Emperor was mainly titular, but the emperors from the House of Habsburg also directly ruled a large portion of imperial territory (lands of the Archduchy of Austria and the Kingdom of Bohemia), as well as the Kingdom of Hungary. The Austrian domain was thus a major European power in its own right, ruling over some eight million subjects. Another branch of the House of Habsburg ruled over Spain and its empire, which included the Spanish Netherlands, southern Italy, the Philippines, and much of the Americas. In addition to Habsburg lands, the Holy Roman Empire contained several regional powers, such as the Duchy of Bavaria, the Electorate of Saxony, the Margraviate of Brandenburg, the Electorate of the Palatinate, the Landgraviate of Hesse, the Archbishopric of Trier, and the Free Imperial City of Nuremberg. Peace of Augsburg After the Protestant Reformation, these independent states became divided between Catholic and Protestant rulership, giving rise to conflict. 
The Peace of Augsburg (1555), signed by Charles V, Holy Roman Emperor, ended the war between German Lutherans and Catholics. The Peace established the principle Cuius regio, eius religio (“Whose realm, his religion”), which allowed Holy Roman Empire princes to select either Lutheranism or Catholicism within the domains they controlled; this ultimately reaffirmed the princes’ independent authority over their own states. Subjects, citizens, or residents who did not wish to conform to a prince’s choice were given a period in which they were free to emigrate to different regions in which their desired religion had been accepted. Although the Peace of Augsburg created a temporary end to hostilities, it did not resolve the underlying religious conflict, which was made yet more complex by the spread of Calvinism throughout Germany in the years that followed. This added a third major faith to the region, but its position was not recognized in any way by the Augsburg terms, to which only Catholicism and Lutheranism were parties. Tensions Mount Religious tensions remained strong throughout the second half of the 16th century. The Peace of Augsburg began to unravel—some converted bishops refused to give up their bishoprics and certain Habsburg rulers, as well as other Catholic rulers of the Holy Roman Empire and Spain, sought to restore the power of Catholicism in the region. This was evident in the Cologne War (1583 – 1588), which ensued when the prince-archbishop of the city—Gebhard Truchsess von Waldburg—converted to Calvinism. As he was an imperial elector, this could have produced a Protestant majority in the college that elected the Holy Roman Emperor, a position that Catholics had always held. At the beginning of the 17th century, the Rhine lands and those south to the Danube were largely Catholic, while Lutherans dominated the north and Calvinists dominated certain other areas, such as west-central Germany, Switzerland, and the Netherlands. 
Minorities of each creed existed almost everywhere, however. In some lordships and cities, the numbers of Calvinists, Catholics, and Lutherans were approximately equal. Much to the consternation of their Spanish ruling cousins, the Habsburg emperors who followed Charles V (especially Ferdinand I and Maximilian II, but also Rudolf II and his successor Matthias) were content to allow the princes of the empire to choose their own religious policies. These rulers avoided religious wars within the empire by allowing the different Christian faiths to spread without coercion. This angered those who sought religious uniformity. Meanwhile, Sweden and Denmark—both Lutheran kingdoms—sought to assist the Protestant cause in the Empire and wanted to gain political and economic influence there as well. By 1617, it was apparent that Matthias, Holy Roman Emperor and King of Bohemia, would die without an heir, and that his lands would go to his nearest male relative: his cousin Archduke Ferdinand II of Austria, heir-apparent and Crown Prince of Bohemia. War Breaks Out The war began when the newly elected Holy Roman Emperor Ferdinand II tried to impose religious uniformity on his domains, forcing Roman Catholicism on its peoples, which resulted in the Protestant states banding together to revolt against him. Ferdinand II, educated by the Jesuits, was a staunch Catholic who wanted to impose religious uniformity on his lands; this made him highly unpopular in Protestant Bohemia. The population’s sentiments notwithstanding, the added insult of the nobility’s rejection of Ferdinand, who had been elected Bohemian Crown Prince in 1617, triggered the Thirty Years’ War in 1618, when his representatives were thrown out of a window and seriously injured. The so-called Defenestration of Prague provoked open revolt in Bohemia, which had powerful foreign allies. Ferdinand was upset by this calculated insult, but his intolerant policies in his own lands had left him in a weak position. 
In the next few years the Habsburg cause seemed to suffer unrecoverable reverses, while the Protestant cause appeared to be heading toward a quick overall victory. The war can be divided into four major phases: the Bohemian Revolt, the Danish intervention, the Swedish intervention, and the French intervention. The Bohemian Revolt and the Thirty Years' War The Bohemian Revolt (1618 – 1620) was an uprising of the Bohemian estates against the rule of the Habsburg dynasty, in particular Emperor Ferdinand II, which triggered the Thirty Years' War. Learning Objectives Discuss the topic of religious conflict as a result of the Reformation: the causes and outcome of the Thirty Years War. Key Terms / Key Concepts Peace of Augsburg: a treaty between Charles V and the forces of Lutheran princes on September 25, 1555, which officially ended the religious struggle between the two groups and allowed princes in the Holy Roman Empire to choose which religion would reign in their principality Ferdinand II: His reign as Holy Roman Emperor coincided with the Thirty Years’ War and his aim, as a zealous Catholic, was to restore Catholicism as the only religion in the empire and suppress Protestantism. Bohemian Revolt: an uprising of the Bohemian estates against the rule of the Habsburg dynasty defenestration: the act of throwing someone out of a window Thirty Years' War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation Background In 1555, the Peace of Augsburg had settled religious disputes in the Holy Roman Empire by enshrining the principle of Cuius regio, eius religio, allowing a prince to determine the religion of his subjects. Since 1526, the Kingdom of Bohemia had been governed by Habsburg kings who did not force their Catholic religion on their largely Protestant subjects. In 1609, Rudolf II, Holy Roman Emperor and King of Bohemia (1576 – 1612), expanded Protestant rights. 
He was increasingly viewed as unfit to govern, and other members of the Habsburg dynasty declared his younger brother Matthias to be family head in 1606. Upon Rudolf’s death, Matthias succeeded in the rule of Bohemia. Without heirs, Emperor Matthias sought to assure an orderly transition during his lifetime by having his dynastic heir (the fiercely Catholic Ferdinand of Styria, later Ferdinand II, Holy Roman Emperor) elected to the separate royal thrones of Bohemia and Hungary. Ferdinand was a proponent of the Catholic Counter-Reformation and not well-disposed to Protestantism or Bohemian freedoms. Some of the Protestant leaders of Bohemia feared they would be losing the religious rights granted to them by Emperor Rudolf II in his Letter of Majesty (1609). They preferred the Protestant Frederick V--Elector of the Palatinate (successor of Frederick IV, the creator of the Protestant Union). However, other Protestants supported the stance taken by the Catholics, and in 1617 Ferdinand was duly elected by the Bohemian Estates to become the Crown Prince and, automatically upon the death of Matthias, the next King of Bohemia. The Defenestration of Prague The king-elect then sent, in May 1618, two Catholic councillors (Vilem Slavata of Chlum and Jaroslav Borzita of Martinice) as his representatives to Hradčany castle in Prague. Ferdinand had wanted them to administer the government in his absence. On May 23, 1618, an assembly of Protestants seized them and threw them (and also secretary Philip Fabricius) out of the palace window, which was some sixty-nine feet off the ground. Remarkably, though injured, they survived. This event, known as the Defenestration of Prague, started the Bohemian Revolt. Soon afterward, the Bohemian conflict spread through all of the Bohemian Crown--including Bohemia, Silesia, Upper and Lower Lusatia, and Moravia. (Moravia was already embroiled in a conflict between Catholics and Protestants.) 
The religious conflict eventually spread across the whole continent of Europe, involving France, Sweden, and several other countries. Aftermath Immediately after the defenestration, the Protestant estates and Catholic Habsburgs started gathering allies for war. After the death of Matthias in 1619, Ferdinand II was elected Holy Roman Emperor. At the same time, the Bohemian estates deposed Ferdinand as King of Bohemia (Ferdinand remained emperor, since the titles are separate) and replaced him with Frederick V, Elector Palatine, who was a leading Calvinist and the son-in-law of the Protestant James VI and I, King of Scotland, England, and Ireland. Because they deposed a properly chosen king, the Protestants could not gather the international support they needed for war. Just two years after the Defenestration of Prague, Ferdinand and the Catholics regained power in the Battle of White Mountain on November 8, 1620. This became known as the first battle in the Thirty Years' War. This was a serious blow to Protestant ambitions in the region. As the rebellion collapsed, the widespread confiscation of property and suppression of the Bohemian nobility ensured the country would return to the Catholic side after more than two centuries of Protestant dissent. There was plundering and pillaging in Prague for weeks following the battle. Several months later, twenty-seven nobles and citizens were tortured and executed in the Old Town Square. Twelve of their heads were impaled on iron hooks and hung from the Bridge Tower as a warning. This also contributed to catalyzing the Thirty Years' War. Danish and Dutch Intervention in the Thirty Years' War After the Defenestration of Prague and the ensuing Bohemian Revolt, the Protestants warred with the Catholic League until the former were firmly defeated at the Battle of Stadtlohn in 1623. 
After this catastrophe, Frederick V, already in exile in The Hague, and under growing pressure from his father-in-law James I, was forced to abandon any hope of launching further campaigns. The Protestant rebellion had been crushed. Frederick was forced to sign an armistice with Holy Roman Emperor Ferdinand II, thus ending the “Palatine Phase” of the Thirty Years' War. Learning Objectives Discuss the topic of religious conflict as a result of the Reformation: the causes and outcome of the Thirty Years War. Key Terms / Key Concepts Peace of Augsburg: a treaty between Charles V and the forces of Lutheran princes on September 25, 1555, which officially ended the religious struggle between the two groups and allowed princes in the Holy Roman Empire to choose which religion would reign in their principality Ferdinand II: His reign as Holy Roman Emperor coincided with the Thirty Years’ War and his aim, as a zealous Catholic, was to restore Catholicism as the only religion in the empire and suppress Protestantism. Bohemian Revolt: an uprising of the Bohemian estates against the rule of the Habsburg dynasty defenestration: the act of throwing someone out of a window Edict of Restitution: a belated attempt by Ferdinand II to impose and restore the religious and territorial situations reached in the Peace of Augsburg (1555), passed eleven years into the Thirty Years’ War Thirty Years' War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation Danish Intervention After the Bohemian Revolt was suppressed by Ferdinand II, Christian IV—the Danish king—led troops against Ferdinand because of his fear that recent Catholic successes threatened his sovereignty as a Protestant nation. Dutch Intervention Peace following the imperial victory at Stadtlohn proved short lived, with conflict resuming at the initiation of Denmark. Denmark had feared that the recent Catholic successes threatened its sovereignty as a Protestant nation. 
Danish involvement, referred to as the Low Saxon War, began when Christian IV of Denmark--a Lutheran who also ruled as Duke of Holstein, a duchy within the Holy Roman Empire--helped the Lutheran rulers of neighboring Lower Saxony by leading an army against Ferdinand II’s imperial forces in 1625. Christian IV had profited greatly from his policies in northern Germany. For instance, in 1621, Hamburg had been forced to accept Danish sovereignty. Denmark’s King Christian IV had obtained for his kingdom a level of stability and wealth that was virtually unmatched elsewhere in Europe. Denmark was funded by tolls on the Oresund and also by extensive war reparations from Sweden. Denmark’s cause was aided by France, as well as Charles I of England who agreed to help subsidize the war; Charles I’s aid was most likely the result of familial connections, as Christian was a blood uncle to both the Stuart king and his sister Elizabeth of Bohemia through their mother, Anne of Denmark. Some 13,700 Scottish soldiers under the command of General Robert Maxwell, 1st Earl of Nithsdale were sent as allies to help Christian IV. Moreover, some 6,000 English troops under Charles Morgan also eventually arrived to bolster the defense of Denmark, though it took longer for them to arrive than Christian had hoped, due partially to the ongoing British campaigns against France and Spain. Thus, Christian, as war-leader of the Lower Saxon Circle, entered the war with an army of only 20,000 mercenaries, some of his allies from England and Scotland, and a national army 15,000 strong, leading them as Duke of Holstein rather than as King of Denmark. War Ensues To fight Christian, Ferdinand II employed the military help of Albrecht von Wallenstein, a Bohemian nobleman who had made himself rich from the confiscated estates of his Protestant countrymen. 
Wallenstein pledged his army, which numbered between 30,000 and 100,000 soldiers, to Ferdinand II in return for the right to plunder the captured territories. Christian, who knew nothing of Wallenstein’s forces when he invaded, was forced to retire before the combined forces of Wallenstein and Tilly. Christian’s mishaps continued when all of his allies were forced aside: France was in the midst of a civil war, Sweden was at war with the Polish–Lithuanian Commonwealth, and neither Brandenburg nor Saxony was interested in changes to the tenuous peace in eastern Germany. Moreover, neither of the substantial English contingents arrived in time to prevent Wallenstein’s defeat of Mansfeld’s army at the Battle of Dessau Bridge (1626) or Tilly’s victory at the Battle of Lutter (1626). Wallenstein’s army marched north, occupying Mecklenburg, Pomerania, and Jutland itself, but it proved unable to take the Danish capital, Copenhagen, on the island of Zealand. Wallenstein lacked a fleet, and neither the Hanseatic ports nor the Poles would allow the building of an imperial fleet on the Baltic coast. He then laid siege to Stralsund, the only belligerent Baltic port with sufficient facilities to build a large fleet. It soon became clear, however, that the cost of continuing the war would far outweigh any gains from conquering the rest of Denmark. Wallenstein feared losing his northern German gains to a Danish-Swedish alliance, while Christian IV had suffered another defeat in the Battle of Wolgast (1628); both were ready to negotiate. Negotiations and the Edict of Restitution Negotiations concluded with the Treaty of Lübeck in 1629, which stated that Christian IV could retain control over Denmark (including the duchies of Sleswick and Holstein) if he would abandon his support for the Protestant German states. Thus, in the following two years, the Catholic powers subjugated more land. 
At this point, the Catholic League persuaded Ferdinand II to take back the Lutheran holdings that were, according to the Peace of Augsburg, rightfully the possession of the Catholic Church. Enumerated in the Edict of Restitution (1629), these possessions included two archbishoprics, sixteen bishoprics, and hundreds of monasteries. In the same year, Gabriel Bethlen, the Calvinist prince of Transylvania, died. Only the port of Stralsund continued to hold out against Wallenstein and the emperor, having been bolstered by Scottish “volunteers” who arrived from the Swedish army to support their countrymen already there in the service of Denmark. Swedish Intervention in the Thirty Years' War The Swedish intervention in the Thirty Years' War, which took place between 1630 and 1635, was a major turning point of the war, and is often considered to be an independent conflict. After several attempts by the Holy Roman Empire to prevent the spread of Protestantism in Europe, King Gustav II Adolf of Sweden ordered a full-scale invasion of the Catholic states. Although he was killed in action, his armies successfully defeated their enemies and gave birth to the Swedish Empire after proving their ability in combat. The new European power would last for a hundred years before being overwhelmed by numerous enemies in the Great Northern War. Learning Objectives Discuss the topic of religious conflict as a result of the Reformation: the causes and outcome of the Thirty Years War. Key Terms / Key Concepts Ferdinand II: His reign as Holy Roman Emperor coincided with the Thirty Years’ War and his aim, as a zealous Catholic, was to restore Catholicism as the only religion in the empire and suppress Protestantism. 
Edict of Restitution: a belated attempt by Ferdinand II to impose and restore the religious and territorial situations reached in the Peace of Augsburg (1555), passed eleven years into the Thirty Years’ War Thirty Years' War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation Gustavus Adolphus: King of Sweden from 1611 to 1632, also known as Gustav II Adolf, he led Sweden's emergence as a major European power and commanded Swedish forces during the Swedish intervention in the Thirty Years' War from 1630 into 1632 Pomerania: area between Germany and Poland on the southern coast of the Baltic Sea ravaged and depopulated during the Thirty Years' War Peace of Prague: 1635 peace ending Saxony’s participation in the Thirty Years’ War, leading to the withdrawal of other German powers, and leaving the War largely to foreign powers Swedish Intervention The Swedish intervention in the Thirty Years' War began when King Gustav II Adolf of Sweden ordered a full-scale invasion of the Catholic states; it was a major turning point of the war. Background The king of Sweden, Gustav II Adolf, had been well informed of the war between the Catholics and Protestants in the Holy Roman Empire for some time, but his hands were tied because of the constant enmity of Poland. The Polish royal family, the primary branch of the House of Vasa, had once claimed the throne of Sweden. Lutheranism was the primary religion of Sweden and had by then established a firm grip on the country. Notably, one of the reasons that Sweden had so readily embraced Lutheranism was that converting to it allowed the crown to seize all the lands in Sweden that were possessed by the Roman Catholic Church. As a result of this seizure and the money that the crown gained, the crown was greatly empowered. 
Gustav was concerned about the growing power of the Holy Roman Empire and, like Christian IV before him, was heavily subsidized by Cardinal Richelieu—the chief minister of Louis XIII of France, as well as by the Dutch. Sweden’s Army During this time, and while Sweden was under a truce with Poland, Gustav established a military system that was to become the envy of Europe. He drew up a new military code, and the improvements to Sweden’s military order even pervaded the state by fueling fundamental changes in the economy. The improvements included tight discipline and rewards for meritorious service. Soldiers who had displayed courage and distinguished themselves in the line of duty were paid generously, in addition to being given pensions. The corps of engineers were the most modern of their age, and in the campaigns in Germany the population repeatedly expressed surprise at the extensive nature of the entrenchment and the elaborate nature of the equipment. The military reforms brought the Swedish military to the highest levels of readiness and were to become the standard that European states would strive for. Swedish Intervention From 1630 to 1634, Swedish-led armies drove the Catholic forces back, regaining much of the lost Protestant territory. Swedish forces entered the Holy Roman Empire via the Duchy of Pomerania, which had served as the Swedish bridgehead since the Treaty of Stettin (1630). After dismissing Wallenstein in 1630, from fear he was planning a revolt, Ferdinand II became dependent on the Catholic League, while Gustavus Adolphus allied with France and Bavaria. At the Battle of Breitenfeld (1631), Gustavus Adolphus’s forces defeated the Catholic League led by Tilly. A year later, they met again in another Protestant victory, this time accompanied by the death of Tilly. The upper hand had now switched from the Catholic League to the Protestant Union, led by Sweden. With Tilly dead, Ferdinand II returned to the aid of Wallenstein and his large army. 
Wallenstein marched to the south, threatening Gustavus Adolphus’s supply chain. Gustavus Adolphus knew that Wallenstein was waiting for the attack and was prepared but found no other option. Wallenstein and Gustavus Adolphus clashed in the Battle of Lützen (1632), where the Swedes prevailed, but Gustavus Adolphus was killed. Ferdinand II’s suspicion of Wallenstein resumed in 1633, when Wallenstein attempted to arbitrate the differences between the Catholic and Protestant sides. Ferdinand II arranged for Wallenstein’s arrest after removing him from command, probably due to a fear that he would switch sides. One of Wallenstein’s soldiers, Captain Devereux, killed him when he attempted to contact the Swedes in the town hall of Eger (Cheb) on February 25, 1634. The same year, the Protestant forces, lacking Gustav’s leadership, were smashed at the First Battle of Nördlingen by the Spanish-Imperial forces commanded by Cardinal-Infante Ferdinand. During the campaign, Sweden managed to conquer half of the imperial kingdoms, making it the continental leader of Protestantism until the Swedish Empire ended in 1721. Peace of Prague By the spring of 1635, all Swedish resistance in the south of Germany had ended. After that, the imperialist and the Protestant German sides met for negotiations, producing the Peace of Prague (1635); this treaty entailed a delay in the enforcement of the Edict of Restitution for forty years and allowed Protestant rulers to retain secularized bishoprics held by them in 1627. This protected the Lutheran rulers of northeastern Germany, but not those of the south and west. Initially after the Peace of Prague, the Swedish armies were pushed north into Germany by the reinforced imperial army. The treaty also provided for the union of the emperor’s army and the armies of the German states into a single army of the Holy Roman Empire. 
Finally, German princes were forbidden from establishing alliances amongst themselves or with foreign powers, and amnesty was granted to any ruler who had taken up arms against the emperor after the arrival of the Swedes in 1630. This treaty failed to satisfy France, however, because of the renewed strength it granted the Habsburgs. France then entered the conflict, beginning the final period of the Thirty Years’ War. Sweden did not take part in the Peace of Prague, and it joined with France in continuing the war. French Intervention and the Conclusion of the Thirty Years' War No longer able to tolerate the encirclement by two major Habsburg powers on its borders, Catholic France entered the Thirty Years’ War on the side of the Protestants to counter the Habsburgs and bring the war to an end. Learning Objectives Discuss the topic of religious conflict as a result of the Reformation: the causes and outcome of the Thirty Years War. Key Terms / Key Concepts Thirty Years' War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation Gustavus Adolphus: King of Sweden from 1611 to 1632, also known as Gustav II Adolf, he led Sweden's emergence as a major European power and commanded Swedish forces during the Swedish intervention in the Thirty Years' War from 1630 into 1632 defenestration: the act of throwing someone out of a window Peace of Prague: 1635 peace ending Saxony’s participation in the Thirty Years’ War, leading to the withdrawal of other German powers, and leaving the War largely to foreign powers Peace of Westphalia: a collection of peace treaties that ended the Thirty Years’ War France’s Opposition to the Holy Roman Empire France, though Roman Catholic, was a rival of the Holy Roman Empire and Spain. Cardinal Richelieu, the chief minister of King Louis XIII of France, considered the Habsburgs too powerful because they held a number of territories on France’s eastern border, including portions of the Netherlands. 
Richelieu had already begun intervening indirectly in the war in January 1631, when the French diplomat Hercule de Charnacé signed the Treaty of Bärwalde with Gustavus Adolphus—by which France agreed to support the Swedes with 1,000,000 livres each year in return for a Swedish promise to maintain an army in Germany against the Habsburgs. The treaty also stipulated that Sweden would not conclude a peace with the Holy Roman Emperor without first receiving France’s approval. France Enters the War By 1635 Sweden’s ability to continue the war alone appeared doubtful. In September 1634 Swedish forces suffered a rout at Nördlingen. The following year Protestant German princes sued for peace with the German emperor, resulting in the Peace of Prague. Richelieu made the decision to enter the war against the Habsburgs. France declared war on Spain in May 1635, and on the Holy Roman Empire in August 1636, opening offensives against the Habsburgs in Germany and the Low Countries. France then aligned its strategy with the allied Swedes in Wismar (1636) and Hamburg (1638). Early French military efforts were met with disaster, and the Spanish counter-attacked, invading French territory. The imperial general Johann von Werth and Spanish commander Cardinal-Infante Ferdinand ravaged the French provinces of Champagne, Burgundy, and Picardy, and even threatened Paris in 1636. Then, the tide began to turn for the French. The Spanish army was repulsed by Bernhard of Saxe-Weimar. Bernhard’s victory in the Battle of Compiègne pushed the Habsburg armies back towards the borders of France. Widespread fighting ensued until 1640, with neither side gaining an advantage. However, in 1640 the war reached a climax and the tide turned clearly toward France and against Spain, starting with the siege and capture of the fort at Arras. The French conquered Arras following a siege that lasted from June 16 to August 9, 1640. 
The fall of Arras paved the way for the French to take all of Flanders. The ensuing French campaign against Spanish forces in Flanders culminated in a decisive French victory at Rocroi in May 1643. Continued Swedish War Efforts After the Peace of Prague the Swedes reorganized the Royal Army under Johan Banér and created a new one—the Army of the Weser, under the command of Alexander Leslie. The two army groups moved south in the spring of 1636, re-establishing alliances on the way, including a revitalized one with Wilhelm of Hesse-Kassel. They then combined and confronted the imperialists at the Battle of Wittstock. Despite the odds being stacked against them, the Swedish army won. This success largely reversed many of the effects of their defeat at Nördlingen, albeit not without creating some tensions between Banér and Leslie. After the Battle of Wittstock, the Swedish army regained the initiative in the German campaign. In 1642, outside Leipzig, the Swedish Field Marshal Lennart Torstenson defeated an army of the Holy Roman Empire led by Archduke Leopold Wilhelm of Austria, and his deputy Prince-General Ottavio Piccolomini, Duke of Amalfi. The imperial army suffered 20,000 casualties. In addition, the Swedish army took 5,000 prisoners and seized forty-six guns, at a cost to themselves of 4,000 killed or wounded. The battle enabled Sweden to occupy Saxony and impressed on Ferdinand III the need to include Sweden, and not just France, in any peace negotiations. Final Battles Over the next four years, fighting continued, but all sides began to prepare for ending the war. In 1648, the Swedes (commanded by Marshal Carl Gustaf Wrangel) and the French (led by Turenne and Condé) defeated the imperial army at the Battle of Zusmarshausen, as well as the Spanish at Lens. However, an imperial army led by Ottavio Piccolomini managed to check the Franco-Swedish army in Bavaria, though their position remained fragile. 
The Battle of Prague in 1648 became the last action of the Thirty Years' War. The general Hans Christoff von Königsmarck, commanding Sweden's flying column, entered the city and captured Prague Castle (where the event that triggered the war—the Defenestration of Prague—had taken place thirty years before). There, his troops captured many valuable treasures, including the Codex Gigas, which contains the Vulgate Bible along with many other historical texts, all written in Latin; it is still preserved in Stockholm as the largest extant medieval manuscript in the world. However, they failed to conquer the right-bank part of Prague and the old city, which resisted until the end of the war. These results left only the imperial territories of Austria safely in Habsburg hands. The Thirty Years' War officially ended with the Peace of Westphalia, a series of peace treaties among the belligerents. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike - Title Image: 1618 Defenestration of Prague. Attribution: Matthäus Merian, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:Prager.Fenstersturz.1618.jpg. License: CC BY-SA: Attribution-ShareAlike.
The Peace of Westphalia Overview The Peace of Westphalia The Peace of Westphalia was a series of peace treaties signed between May and October 1648. It was signed by warring parties in the Westphalian cities of Osnabrück and Münster. The collection of treaties ended the Thirty Years' War. Learning Objective - Evaluate the impact of the Peace of Westphalia on Europe. Key Terms / Key Concepts Peace of Westphalia: a collection of peace treaties that ended the Thirty Years' War The Peace of Westphalia Over a four-year period, the warring nations of the Thirty Years' War (the Holy Roman Empire, France, and Sweden) were actively negotiating at Osnabrück and Münster in Westphalia (present-day northwest Germany). The peace negotiations involved a total of 109 delegations representing European powers, including Holy Roman Emperor Ferdinand III, Philip IV of Spain, the Kingdom of France, the Swedish Empire, the Dutch Republic, the princes of the Holy Roman Empire, and sovereigns of the free imperial cities. The end of the war was not brought about by one treaty, but instead by a group of treaties, collectively named the Peace of Westphalia. The three treaties involved were the Peace of Münster (between the Dutch Republic and the Kingdom of Spain), the Treaty of Münster (between the Holy Roman Emperor and France and their respective allies), and the Treaty of Osnabrück (between the Holy Roman Empire and Sweden and their respective allies). These treaties ended both the Thirty Years' War (1618 – 1648) in the Holy Roman Empire and the Eighty Years' War (1568 – 1648) between Spain and the Dutch Republic, with Spain formally recognizing the independence of the Dutch Republic. 
Terms of the Treaties Along with ending open warfare between the belligerents, the Peace of Westphalia established several important tenets and agreements, including: - All parties would recognize the Peace of Augsburg of 1555, in which each prince would have the right to determine the religion of his own state. This affirmed the principle of cuius regio, eius religio (Whose realm, his religion); the permitted options were Catholicism, Lutheranism, and Calvinism. - Christians living in principalities where their denomination was not the established church were guaranteed the right to practice their faith in public during allotted hours and in private at their will. There were also several territorial adjustments brought about by the peace settlements. The independence of Switzerland from the empire was formally recognized. Sweden received Western Pomerania, Wismar, and the Prince-Bishoprics of Bremen and Verden as hereditary fiefs, thus gaining a seat and vote in the Imperial Diet of the Holy Roman Empire. Barriers to trade and commerce erected during the war were also abolished, and a degree of free navigation was guaranteed on the Rhine. France came out of the war in a far better position than any of the other participants. France retained control of the Bishoprics of Metz, Toul, and Verdun near Lorraine, and received the cities of the Décapole in Alsace and the city of Pignerol near the Spanish Duchy of Milan. The Impact of the Peace of Westphalia The Peace of Westphalia did not entirely end conflicts arising out of the Thirty Years' War. Fighting continued between France and Spain until the Treaty of the Pyrenees in 1659. Nevertheless, it did settle many outstanding European issues of the time. 
Some of the principles developed at Westphalia, especially those relating to respecting the boundaries of sovereign states and non-interference in their domestic affairs, became central to the world order that developed over the following centuries, and remain in effect today. Many of the imperial territories established in the Peace of Westphalia later became the sovereign nation-states of modern Europe. The Peace of Westphalia established the precedent of having peace treaties negotiated and created by a diplomatic congress, as well as a new system of political order in central Europe based upon the concept of co-existing sovereign states. Inter-state aggression was to be held in check by a balance of power. A norm was established against interference in another state’s domestic affairs. As European influence spread across the globe, these Westphalian principles, especially the concept of sovereign states, became central to international law and to the prevailing world order. Attributions Images courtesy of Wikimedia Commons. Boundless World History "The Thirty Years War" https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-thirty-years-war/
Absolutist France Overview Absolutist France Absolutism is a period in French history during the seventeenth and eighteenth centuries in which the power of the monarch was theoretically unrestrained and unlimited. It resulted in extravagant wealth for the French monarchs, such as Louis XIV. It is also characterized by extreme poverty for most of the French population and exorbitant taxation, as well as political, social, and religious repression. Learning Objectives - Investigate the absolutist French state under Louis XIV. - Identify key people and events of Absolutist France. Key Terms / Key Concepts Cardinal: a senior member of the Roman Catholic church who often acts as an advisor to the Pope; second only to the Pope in their power Cardinal Armand de Richelieu: seventeenth-century French cardinal who wielded much power in shaping internal and external French policies King Louis XIV (The Sun King): authoritarian, supremely wealthy monarch of France who represents the height of absolutism Absolutism: practice in which European monarchs had unlimited power Versailles: luxurious palace of King Louis XIV outside of Paris The Fronde: series of French civil wars that erupted in the mid-seventeenth century in response to oppressive, absolutist rule under the French monarchies Huguenot: a religious minority group of French Protestants Absolutist France The most exemplary case of absolutist government was that of France in the seventeenth century. The transformation of the French state from a conventional Renaissance-era monarchy to an absolute monarchy began under the reign of Louis XIII, the son of Henry IV. Louis XIII came to the throne as an eight-year-old when his father was assassinated in 1610. Following conventional practice when a king was too young to rule, his mother Marie de Medici held power as regent—one who rules in the name of the king; she also enlisted the help of a brilliant French cardinal, Armand de Richelieu. 
While Marie de Medici eventually stepped down as regent, Richelieu joined the king as his chief minister in 1628 and continued to play the key role in shaping the French state. Portrait of Cardinal Armand de Richelieu. Richelieu deserves a great deal of credit for laying the foundation for absolutism in France. He suppressed various revolts against royal power that were led by nobles, and he created a system of royal officials called Intendants. These officials were usually men drawn from the mercantile classes. They collected royal taxes and oversaw administration and military recruitment in the regions to which they were assigned; they did not have to answer to local lords. Richelieu's major focus was improving tax collection. To do so, he abolished three out of six regional assemblies that, traditionally, had the right to approve changes in taxation. He made himself superintendent of commerce and navigation, recognizing the growing importance of commerce in providing royal revenue. He managed to increase the revenue from the direct tax on land almost threefold during his tenure (1628 – 1642). That said, while he did curtail the power of the elite nobles, most of those who bore the brunt of his improved techniques of taxation were the peasants; Richelieu compared the peasants to mules, noting that they were only useful for working. Richelieu was also a cardinal: one of the highest-ranking “princes of the church,” officially beholden only to the pope. His real focus, however, was the French crown. It was said that he “worshiped the state” much more than he appeared to concern himself with his duties as a cardinal. He even oversaw French support of the Protestant forces in the Thirty Years’ War as a check against the power of the Habsburgs, as well as supported the Ottoman Turks against the Habsburgs for the same reason. 
Just to underline this point: a Catholic cardinal, Richelieu, supported Protestants and Muslims against a Catholic monarchy in the name of French power. King Louis XIV: The Sun King Louis XIII died in 1643, and his son became King Louis XIV. The latter was still too young to rule, so his mother became regent, ruling alongside Richelieu’s protégé, Jules Mazarin. Mazarin continued Richelieu’s policies and focus on taxation and royal centralization. Almost immediately, however, simmering resentment against the growing power of the king exploded in a series of uprisings against the crown known as The Fronde, a noble-led civil war against the monarchy. The rebels were defeated by forces loyal to the crown in 1653, but the uprisings made a profound impression on the young king, who vowed to bring the nobles into line. When Mazarin died in 1661, Louis ascended to full power at just twenty-three years old. Louis went on to a long and dazzling rule, achieving the height of royal power and prestige, not just in France but in all of Europe. He ruled from 1643 – 1715 (including the years in which he ruled under the guidance of a regent), which means he was king for an astonishing 72 years at a time when the average life expectancy for those surviving infancy was only about 40 years. Louis was depicted as the sun god Apollo and called the “Sun King”, a term and an image he actively cultivated while declaring himself “without equal”. He was, among other things, a master marketer and propagandist of himself and his own authority. He had teams of artists, playwrights, and architects build statues, paint pictures, write plays and stories, and build buildings all glorifying his image. 
Louis' Versailles Palace and Court Culture Famously, Louis developed what had begun as a hunting lodge (first built by his father) into the most glorious palace in Europe; this palace was constructed in the village of Versailles, about 15 miles southwest of Paris, and built in the baroque style and lavishly decorated. Over the decades of his long rule, the structure and grounds of the Palace of Versailles grew into the largest and most spectacular seat of royal power in Europe, on par with any palace in the world at the time. There were 1,400 fountains in the gardens, 1,200 orange trees, and an ongoing series of operas, plays, balls, and parties. Since Louis ultimately had 2,000 rooms built both in the palace and in apartments in the village, 10,000 people could live in the palace and its additional buildings, which were all furnished at the state’s expense. Today, the grounds cover about 2,000 acres, or just over 3 square miles (by comparison, Central Park in New York City is a mere 843 acres in size). Louis expected high-ranking nobles to spend part of the year at Versailles, where they were lodged in apartments and spent their days bickering, gossiping, gambling, and taking part in elaborate rituals surrounding the person of the king. Each morning, high-ranking nobles greeted the king as he awoke (the “rising” of the king, in parallel to the rising of the sun), hand-picked favorites carried out such tasks as tying the ribbons on his shoes, and then the procession accompanied him to breakfast. Comparable rituals continued throughout the day, ensuring that only those nobles in the king’s favor ever had the opportunity to speak to him directly. The rituals were carefully staged not only to represent deference to Louis but also to emphasize the hierarchy of ranks among the nobles themselves, undermining their unity and forcing them to squabble over his favor. 
Around the king’s person, courtiers had to be very careful to wear the right clothes, make the right gestures, use the correct phrases, and even display the correct facial expressions. Deviation could, and generally did, lead to humiliation and sometimes permanent loss of the king’s favor, to the delighted mockery of the other nobles. This was not just an elaborate game; anyone wishing to "get" anything from the royal government had to convince the king and his officials that he was witty, poised, fashionable, and respected within the court. One false move and a career could be ruined. At the same time, the rituals surrounding the king were not invented to humiliate and impoverish his nobles. Instead, they celebrated each noble’s power in terms of his or her proximity to the king. Nobles at Versailles were reminded of two things at once: their dependence and deference to the king and their own dignity and power as those who had the right to be near the king. Not just nobles participated in the dizzying web of favor-trading, gossip, and bribery at Versailles, however. Louis XIV prided himself on the “openness” of his court, contrasting it with the closed-off court of a tyrant. Any well-dressed person was welcome to walk through the palace and the grounds and confer with those present. Both men and women from very humble origins sometimes rose to prominence at Versailles and made a healthy living by serving as go-betweens for elites seeking royal positions through the bureaucracy. Others took advantage of the state’s desperate need for revenue by proposing new tax schemes; those that were accepted usually came with a payment for the person who submitted the scheme, so it was possible to make a living by “brainstorming” for tax revenue on behalf of the monarchy. The palace had been designed for display, not comfort. 
As a result, some aspects of life at Versailles seem comic today: the palace is so huge that the food was usually cold before it made it from the kitchens to the dining room; on one occasion Louis’ wine froze en route. Some of the nobles who lived in the palace or its grounds would use the hallways to relieve themselves instead of the privies because the latter were so inadequate and far from their rooms. The costs of building and maintaining such an enormous temple to monarchical power were immense. During the height of its construction, 60% of the royal revenue went to funding the elaborate court at Versailles itself (this later dropped to 5% under Louis XVI, but the old figure was well-remembered and resented); this was an enormous ongoing expenditure that nevertheless shored up royal prestige. Louis himself delighted in the life at court, refusing to return to Paris (which he hated) and dismissing the financial costs as beneath his dignity to take notice of. Louis' Domestic Achievements Louis did not just preside over the ongoing pageant at Versailles. He was dedicated to glorifying French achievements in art and scholarship, as well as to his personal obsession: warfare. He created important theater companies, founded France’s first scientific academy, and supported the Académie Française—the body dedicated to preserving the purity of the French language founded earlier by Richelieu (during Louis XIV’s reign, the Academy published the first official French dictionary). French literature, art, and science all prospered under his sponsorship, and French became the language of international diplomacy among European states. To keep up with costs, Louis continued to entrust revenue collection to non-noble bureaucrats. The most important was Jean Baptiste Colbert, who doubled royal revenues by reducing the cut taken by tax collectors. Colbert also increased tariffs on foreign trade going to France and greatly expanded France’s overseas commercial interests. 
Louis' Religious Intolerance While Louis’s primary legacy was the image of monarchy that he created, his practical policies were largely destructive to France itself. First, he relentlessly persecuted religious minorities, going after various small groups of religious dissenters but concentrating most of his attention and ire on the Huguenots. In 1685 he officially revoked the Edict of Nantes that his grandfather had created to grant the Huguenots toleration, and he offered them the choice of conversion to Catholicism or exile. While many did convert, over 200,000 fled to parts of Germany, the Netherlands, England, and America. In one fell swoop, Louis crippled what had been among the most commercially productive sectors of the French population, ultimately strengthening his various enemies in the process. An Empire at War Louis waged constant war. From 1680 – 1715 Louis launched a series of wars, primarily against his Habsburg rivals, which succeeded in seizing small chunks of territory on France’s borders; this seizing of various Habsburg lands saddled the monarchy with enormous debts. Colbert, the architect of the vastly more efficient systems of taxation, repeatedly warned Louis that these wars were financially destructive; however, Louis simply ignored the question of whether he had enough money to wage them. The threat of France was so great that even traditional enemies like England and the Netherlands on one hand and the Habsburgs on the other joined forces against Louis; after a lengthy war, the Treaty of Utrecht in 1713 forced Louis to abandon further territorial ambitions. Furthermore, the costs of the wars were so high that his government desperately sought new sources of revenue, selling noble titles and bureaucratic offices, instituting still new taxes, and further trampling the peasants. When he died in 1715, France was bankrupt. 
Primary Source: Jean Domat, "On Social Order and Absolute Monarchy" Jean Domat (1625-1696), “On Social Order and Absolute Monarchy” [Abridged] There is no one who is not convinced of the importance of good order in the state and who does not sincerely wish to see that state well ordered in which he has to live. For everyone understands, and feels in himself by experience and by reason, that this order concerns and touches him in a number of ways .... Everyone knows that human society forms a body of which each person is a member; and this truth, which Scripture teaches us and which the light of reason makes plain, is the foundation of all the duties that relate to the conduct of each person toward others and toward the body as a whole. For these sorts of duties are nothing else but the functions appropriate to the place each person holds according to his rank in society. It is in this principle that we must seek the origin of the rules that determine the duties, both of those who govern and of those who are subject to government. For it is through the place God has assigned each person in the body of society, that He, by calling him to it, prescribes all his functions and duties. And just as He commands everyone to obey faithfully the precepts of His law that make up the duties of all people in general, so He prescribes for each one in particular the duties proper to his condition and status, according to his rank in the body of which he is a member. This includes the functions and duties of each member with respect to other individuals and with respect to the body as a whole. [Necessity and the Origin of Government] Because all men are equal by nature, that is to say, by their basic humanity, nature does not make anyone subject to others .... 
But within this natural equality, people are differentiated by factors that make their status unequal, and forge between them relationships and dependencies that determine the various duties of each toward the others, and make government necessary .... The first distinction that subjects people to others is the one created by birth between parents and children. And this distinction leads to a first kind of government in families, where children owe obedience to their parents, who head the family. The second distinction among persons arises from the diversity of employments required by society, and which unite them all into a body of which each is a member. For just as God has made each person depend on the help of others for various needs, He has differentiated their status and their employments for the sake of all these needs, assigning to people the place in which they should function. And it is through these interdependent employments and conditions that the ties binding human society are formed, as well as the ties among its individual members. This also makes it necessary to have a head to unite and rule the body of the society created by these various employments, and to maintain the order of the relationships that give the public the benefit of the different functions corresponding to each person's station in life. *** It is a further consequence of these principles that, since all people do not do their duty and some, on the contrary, commit injustices, for the sake of keeping order in society, injustices and all enterprises against this order must be repressed: which was possible only through authority given to some over others, and which made government necessary. 
This necessity of government over people equal by their nature, distinguished from each other only by the differences that God established among them according to their stations and professions, makes it clear that government arises from His will; and because only He is the natural sovereign of men, it is from Him that all those who govern derive their power and all their authority, and it is God Himself Whom they represent in their functions. [The Duties of the Governed] Since government is necessary for the public good, and God Himself has established it, it is consequently also necessary for those who are subject to government, to be submissive and obedient. For otherwise they would resist God Himself, and government, which should be the bond of peace and unity that brings about the public good, would become an occasion for divisions and disturbances that would cause its downfall. The first duty of obedience to government is the duty to obey those who hold the first place in it, monarchs or others who are the heads of the body that makes up society, and to obey them as the limbs of the human body obey the head to which they are united. This obedience to him who governs should be considered as obedience to the power of God Himself, Who has instituted [the prince] as His lieutenant .... *** Obedience to government includes the duties of keeping the laws, not undertaking anything contrary to them, performing what is ordered, abstaining from what is forbidden, shouldering public burdens, whether offices or taxes; and in general everyone is obliged not only not to contravene public order in any way, but to contribute to it [positively] according to his circumstances. 
Since this obedience is necessary to maintain the order and peace that should unite the head and members composing the body of the state, it constitutes a universal duty for all subjects in all cases to obey the orders of the prince, without taking the liberty of passing judgment on the orders they should obey. For otherwise, the right to inquire what is just or not would make everyone a master, and this liberty would encourage seditions. Thus each individual owes obedience to the laws themselves and [even] to unjust orders, provided he can obey and follow them without injustice on his own part. And the only exception that can qualify this obedience is limited to cases in which one could not obey without disobeying the divine law. [The Power, Rights, and Duties of Sovereigns] The sovereign power of government should be proportionate to its mandate, and in the station he occupies in the body of human society that makes up the state, he who is the head should hold the place of God. For since God is the only natural sovereign of men, their judge, their lawgiver, their king, no man can have lawful authority over others unless he holds it from the hand of God .... The power of sovereigns being thus derived from the authority of God, it acts as the arm and force of the justice that should be the soul of government; and that justice alone has the natural claim to rule the minds and hearts of men, for it is over these two faculties of men that justice should reign. *** According to these principles, which are the natural foundations of the authority of those who govern, their power must have two essential attributes: one, to make that justice rule from which their power is entirely derived, and the other, to be as absolute as the rule of that justice itself, which is to say, the rule of God Himself Who is justice and Who wishes to reign through [princes] as He wishes them to reign through Him. 
For this reason Scripture gives the name of gods to those to whom God has entrusted the right of judging, which is the first and most essential of all the functions of government.... Since the power of princes thus comes to them from God, and since He gives it to them only as an instrument of His providence and His rule over the states whose government He delegates to them, it is clear that they should use this power in accordance with the aims that divine providence and rule have established for them; and that the material and visible manifestations of their authority should reflect the operation of the will of God.... [The will of God,] Whose rule they ought to make visible through their power, should be the governing principle for the way they use that power, since their power is the instrument [of the divine will] and is entrusted to them only for that purpose. This, without a doubt, is the foundation and first principle of all the duties of sovereigns, namely to let God Himself rule; that is, to govern according to His will which is nothing other than justice. Thus it is the rule of justice which should be the glory [of the rule] of princes. *** Among the rights of the sovereign, the first is the right to administer justice, the foundation of public order, whether he exercises it himself as occasions arise or whether he lets it be exercised by others whom he delegates for the purpose .... *** This same right to enforce the laws, and to maintain order in general by the administration of justice and the deployment of sovereign power, gives the prince the right to use his authority to enforce the laws of the Church, whose protector, conservator, and defender [sic] he should be; so that by the aid of his authority, religion rules all his subjects.... 
*** Among the rights that the laws give the sovereign should be included [the right] to display all the signs of grandeur and majesty necessary to make manifest the authority and dignity of such wide-ranging and lofty power, and to impress veneration for it upon the minds of all subjects. For although they should see in it the power of God Who has established it and should revere it apart from any visible signs of grandeur, nevertheless since God accompanies His own power with visible splendor on earth and in the heavens as in a throne and a palace... He permits that the power He shares with sovereigns be proportionately enhanced by them in ways suitable for arousing respect in the people. This can only be done by the splendor that radiates from the magnificence of their palaces and the other visible signs of grandeur that surround them, and whose use He Himself has given to the princes who have ruled according to His spirit. *** The first and most essential of all the duties of those whom God raises to sovereign government is to acknowledge this truth: that it is from God that they hold all their power [sic], that it is His place they take, that it is through Him they should reign, and that it is to Him they should look for the knowledge and wisdom needed to master the art of governing. And it is these truths they should make the principle of all their conduct and the foundation of all their duties. *** The first result of these principles is that sovereigns should know what God requires of them in their station and how they should use the power He has given them. And it is from Him they should learn it, by reading His law, whose study He has explicitly prescribed for them, including what they should know in order to govern well. *** These general obligations ... encompass all the specific duties of those who hold sovereign power. 
For [these obligations] cover everything that concerns the administration of justice, the general policing of the state, public order, the repose of subjects, peace of mind in families, vigilance over everything that can contribute to the common good, the choice of able ministers who love justice and truth [sic], the appointment of good men to the dignities and offices that the sovereign himself needs to fill with persons known to him, the observance of regulations for filling other offices with people not subject to his personal choice, discretion in the use of severity or mercy in those cases where the rigor of justice may be tempered, a wise distribution of benefices, rewards, exemptions, privileges, and other favors; good administration of the public finances, prudence in conducting relations with foreign states, and lastly everything that can make government pleasing to good people, terrible to the wicked, and worthy in all respects of the divine mandate to govern men, and of the use of a power which, coming only from God, shares in His own Authority. *** We may add as a last duty of the sovereign, which follows from the first and includes all the others, that although his power seems to place him above the law, no one having the right to call him to account, nevertheless he should observe the laws as they may apply to him. And he is obliged to do this not only in order to set a good example to his subjects and make them love their duty, but because his sovereign power does not exempt him from his own duty, and his station requires him to prefer the general good of the state to his personal interests, and it is a glory for him to look upon the general good as his own. From Modern History Sourcebook, Fordham University Attributions Images courtesy of Wikimedia Commons. Boundless World History "France and Authoritarianism" https://courses.lumenlearning.com/boundless-worldhistory/chapter/france-and-authoritarianism/ "On Social Order and Absolute Monarchy." 
Jean Dormat. Fordham University. https://sourcebooks.fordham.edu/mod/1687domat.asp
Hobbesian Philosophy Overview Hobbesian Philosophy Thomas Hobbes, an English philosopher and scientist, was one of the key figures in the political debates of the Enlightenment period. He introduced a social contract theory based on the relation between the absolute sovereign and civil society. Learning Objectives - Investigate how Hobbesian philosophy impacted European thought and politics. Key Terms / Key Concepts Thomas Hobbes: seventeenth-century English writer and philosopher Leviathan: work published by Hobbes that investigates the social contract Social Contract: Hobbes' theoretical idea about the rights and laws of the state vs. those of the individual Natural Rights: Hobbes' theoretical ideas about the individual in their “natural state” Thomas Hobbes Thomas Hobbes was one of the founders of modern political philosophy and political science. He also contributed to a diverse array of other fields, including history, geometry, the physics of gases, theology, ethics, and general philosophy. Background The Enlightenment has been hailed as the foundation of modern western political and intellectual culture. It brought political modernization to the west by introducing democratic values and institutions, as well as the creation of modern, liberal democracies. Thomas Hobbes, an English philosopher and scientist, was one of the key figures in the political debates of the period. Despite advocating the idea of sovereign absolutism, Hobbes developed some of the fundamentals of European liberal thought: the right of the individual; the natural equality of all men; the artificial character of the political order (which led to the later distinction between civil society and the state); the view that all legitimate political power must be “representative” and based on the consent of the people; and a liberal interpretation of law that leaves people free to do whatever the law does not explicitly forbid. Portrait of Thomas Hobbes.
The Leviathan and Social Contract Hobbes was the first modern philosopher to articulate a detailed social contract theory, which appeared in his 1651 work Leviathan. In his book, Hobbes set out his doctrines of the foundation of states and of legitimate government, along with his project of creating an objective science of morality. Because Leviathan was written during the English Civil War, much of the book is occupied with demonstrating the necessity of a strong central authority to avoid the evil of discord and civil war. Beginning from a mechanical understanding of human beings and their passions, Hobbes considers what life would be like without government, a condition which he calls the “state of nature.” In that state, each person would have a right, or license, to everything in the world. This, Hobbes argues, would lead to a “war of all against all.” In such a state, people fear death and lack both the things necessary to living and the hope of being able to work to obtain commodities. So, to avoid such a state, people agree to a social contract and establish a civil society. According to Hobbes, society is a population beneath a sovereign authority, to whom all individuals in that society give up some rights for the sake of protection. Any power exercised by this authority cannot be resisted because the protector’s sovereign power comes from individuals surrendering their individual power for protection. The individuals are thereby the authors of all decisions made by the sovereign. There is no doctrine of separation of powers in Hobbes’s discussion. According to Hobbes, the sovereign must control civil, military, judicial, and ecclesiastical powers. Illustration of The Leviathan by Thomas Hobbes. Natural Rights Hobbes also included a discussion of natural rights in his moral and political philosophy.
His conception of natural rights extended from his conception of man in a “state of nature.” He argued that the essential natural (human) right was “to use his own power for the preservation of his own nature; that is to say, of his own Life […].” Hobbes sharply distinguished this natural “liberty” from natural “laws.” In his natural state, man’s life consisted entirely of liberties and not at all of laws, which leads to the world of chaos created by unlimited rights. Consequently, if humans wish to live peacefully, they must give up most of their natural rights and create moral obligations to establish a political and civil society. Hobbes objected to the attempt to derive rights from “natural law,” arguing that “law” and “right” are opposites. “Law” refers to obligations. “Right” refers to the absence of obligations. Since by our (human) nature, we seek to maximize our wellbeing, natural or institutional rights are superior to law. People will not follow the laws of nature without first being subjected to sovereign power. This marked an important departure from medieval natural law theories, which gave precedence to obligations over rights. Attributions Images courtesy of Wikimedia Commons Boundless World History "Enlightenment Thinkers" https://courses.lumenlearning.com/boundless-worldhistory/chapter/enlightenment-thinkers/
Absolutist Prussia, Austria and Russia Overview Absolutist Prussia Absolutism, in which a monarch holds unrestrained power, spread throughout Europe during the eighteenth century. Examples of absolutist governments include Austria under the Hapsburgs, Prussia under the Hohenzollerns, and Russia under Peter I and his Romanov successors. Learning Objectives - Identify and connect major political events, characters, and turning points in Absolutist Prussia. - Evaluate the domestic and foreign affairs of Frederick the Great. Key Terms / Key Concepts Frederick the Great: skilled Prussian king who introduced numerous successful internal reforms and successfully defeated the Austrians in the War of Austrian Succession Prussia: major, north German kingdom in the eighteenth century Junkers: landholding, Prussian aristocrats who held significant power in the eighteenth century Modernization of Prussia: set of internal, domestic reforms introduced under Frederick the Great Absolutism in Prussia: Frederick the Great In his youth, Frederick the Great was a sensitive man with tremendous appreciation for intellectual development, arts, and education. Despite his father’s fears, this did not prevent him from becoming a brilliant military strategist during his later reign as King of Prussia. Frederick the Great’s Childhood Frederick II, the son of Frederick William I and Sophia Dorothea of Hanover, was born in Berlin in 1712. His birth was particularly welcomed by his grandfather, as his two previous grandsons had both died in infancy. With the death of Frederick I in 1713, Frederick William became King of Prussia, thus making young Frederick the crown prince. Despite his father’s desire that his education be entirely religious and pragmatic, the young Frederick, with the help of his tutor Jacques Duhan, secretly procured a 3,000-volume library of poetry, Greek and Roman classics, and French philosophy to supplement his official lessons.
As Frederick grew, his preference for music, literature, and French culture clashed with his father’s militarism, resulting in frequent beatings and humiliation from his father. Frederick as Crown Prince Frederick found an ally in his sister Wilhelmine, with whom he remained close for life. At age 16, he formed an attachment to the king’s 13-year-old page, Peter Karl Christoph Keith. Some biographers of Frederick suggest that the attachment was of a sexual nature. As a result, Keith was sent away to an unpopular regiment near the Dutch frontier, while Frederick was temporarily sent to his father’s hunting lodge in order “to repent of his sin.” Around the same time, he became close friends with Hans Hermann von Katte. When he was 18, Frederick plotted to flee to England with Katte and other junior army officers. Frederick and Katte were subsequently arrested and imprisoned. Because they were army officers who had tried to flee Prussia for Great Britain, Frederick William leveled an accusation of treason against the pair. The king briefly threatened the crown prince with the death penalty, then considered forcing Frederick to renounce the succession in favor of his brother, Augustus William, although either option would have been difficult to justify. Instead, the king forced Frederick to watch the decapitation of Katte at Küstrin, leaving the crown prince to faint right before the fatal blow was struck. Frederick was granted a royal pardon and released from his cell, although he remained stripped of his military rank. Instead of returning to Berlin, he was forced to remain in Küstrin and began rigorous schooling in statecraft and administration. Tensions eased slightly when Frederick William visited Küstrin a year later and when Frederick was allowed to visit Berlin on the occasion of his sister Wilhelmine’s marriage to Margrave Frederick of Bayreuth in 1731. The crown prince returned to Berlin a year later.
Frederick eventually married Elisabeth Christine of Brunswick-Bevern in 1733. She was a Protestant relative of the Austrian Habsburgs. He had little in common with his bride and resented the political marriage. Once Frederick secured the throne in 1740 after his father’s death, he immediately separated from his wife and prevented Elisabeth from visiting his court in Potsdam, granting her instead Schönhausen Palace and apartments at the Berliner Stadtschloss. In later years, Frederick would pay his wife formal visits only once a year. Frederick came to the throne with an exceptional inheritance: an army of 80,000 men. By 1770, after two decades of punishing war alternating with intervals of peace, Frederick doubled the size of the huge army, which during his reign would consume 86% of the state budget. Frederick Becomes Leader Prince Frederick was twenty-eight years old when he acceded to the throne of Prussia. His goal was to modernize and unite his vulnerably disconnected lands, and he largely succeeded through aggressive military and foreign policies. Contrary to his father’s fears, Frederick proved himself a courageous colonel of the army and an extremely skillful strategist. Napoleon Bonaparte considered the Prussian king as the greatest tactical genius of all time. After the Seven Years’ War, the Prussian military acquired a formidable reputation across Europe. Esteemed for their efficiency and success in battle, Frederick’s army became a model emulated by other European powers, most notably Russia and France. Frederick was also an influential military theorist whose ideas emerged from his extensive personal battlefield experience and covered issues of strategy, tactics, mobility and logistics. Despite his dazzling success as a military commander, however, Frederick was not a fan of warfare. 
Prussia under Frederick the Great Frederick the Great significantly modernized the Prussian economy, administration, judicial system, education, finance, and agriculture, but never attempted to change the social order based on the dominance of the landed nobility. The Modernization of Prussia As King of Prussia from 1740 until 1786, Frederick the Great helped transform Prussia from a European backwater to an economically strong and politically reformed state. During his reign, the effects of the Seven Years’ War and the gaining of Silesia greatly changed the economy. The conquest of Silesia gave Prussia’s industries access to raw materials and fertile agricultural lands. With the help of French experts, he organized a system of indirect taxation, which provided the state with more revenue than direct taxation. He also promoted the silk trade and opened a silk factory that employed 1,500 people. He protected Prussian industries with high tariffs and minimal restrictions on domestic trade. In 1781, Frederick decided to make coffee a royal monopoly. Disabled soldiers were employed to spy on citizens searching for illegally roasted coffee, much to the annoyance of the general population. Frederick reformed the judicial system and made it possible for men outside the nobility to become judges and senior bureaucrats. He also allowed freedom of speech, the press, and literature, and abolished most uses of judicial torture. Frederick laid the basic foundations of what would eventually become the Prussian primary education system. In 1763, he issued a decree for the first Prussian general school based on the principles developed by Johann Julius Hecker. In 1748, Hecker had founded the first teacher’s seminary in Prussia. The decree expanded the existing schooling system significantly and required that all young citizens, both girls and boys, be educated from the age of five to thirteen or fourteen.
Prussia was among the first countries in the world to introduce tax-funded and compulsory primary education. An important aspect of Frederick’s efforts is the absence of social order reform. In his modernization of military and administration, he relied on the class of Junkers, the Prussian land-owning nobility. Under his rule, they continued to hold their privileges, including the right to hold serfs. Frederick’s attempts to protect the peasantry from cruel treatment and oppression by landlords and lower their labor obligations never really succeeded because of the economic, political, and military influence the Junkers exercised. The Junkers controlled the Prussian army, leading in political influence and social status, and owned immense estates, especially in the northeastern half of Germany. Agriculture Frederick was keenly interested in land use, especially draining swamps and opening new farmland for colonizers who would increase the kingdom’s food supply. He called it “peopling Prussia.” About a thousand new villages were founded in his reign that attracted 300,000 immigrants from outside Prussia. Using improved technology enabled him to create new farmland through a massive drainage program in the country’s marshland. This strategy created roughly 150,000 acres of new farmland, but also eliminated vast swaths of natural habitat, destroyed the region’s biodiversity, and displaced numerous native plant and animal communities. Frederick saw this project as the “taming” and “conquering” of nature, which he regarded as “useless” and “barbarous” in its wild form. He presided over the construction of canals for bringing crops to market and introduced new crops, especially potato and turnip, to the country. Control of grain prices was one of Frederick’s greatest achievements in that it allowed populations to survive in areas where harvests were poor. Frederick also loved animals and founded the first veterinary school in Germany. 
Unusual for his time and aristocratic background, he criticized hunting as cruel, rough, and uneducated. Religious Policies While Frederick was largely non-practicing and tolerated all faiths in his realm, Protestantism became the favored religion, and Catholics were not chosen for higher state positions. Frederick was known to be more tolerant of Jews and Catholics than many neighboring German states, although he expressed strong antisemitic sentiments and, in territories taken over from Poland, persecuted Polish Roman Catholic churches by confiscating goods and property, exercising strict control of churches, and interfering in church administration. Like many leading figures in the Age of Enlightenment, Frederick was a Freemason, and his membership legitimized the group and protected it against charges of subversion. As Frederick made more wasteland arable, Prussia looked for new colonists to settle the land. To encourage immigration, he repeatedly emphasized that nationality and religion were of no concern to him. This policy allowed Prussia’s population to recover very quickly from the considerable losses it suffered during Frederick’s wars. The Death of Frederick the Great Frederick’s popularity continued in Prussia into the late eighteenth century. However, the King who had transformed Prussia gradually became more isolated and solitary. In August 1786, he died at home in Potsdam, at the age of seventy-two. Absolutist Austria The Holy Roman Empire was a multi-ethnic collection of territories in Central Europe that developed during the Early Middle Ages and continued until its dissolution in 1806. The term Holy Roman Empire was not used until the 13th century. As the French philosopher and satirist Voltaire famously wrote of the entity in the eighteenth century, “The Holy Roman Empire was neither ‘Holy’, nor ‘Roman’, nor an ‘Empire’.” Learning Objectives - Evaluate the role of the Holy Roman Empire in the War of Austrian Succession.
- Investigate the foreign and domestic achievements of Empress Maria Theresa and Emperor Joseph II. - Understand how Austria, and the Holy Roman Empire, constitute an absolutist society Key Terms / Key Concepts Holy Roman Empire: landlocked empire in central Europe occupying present-day southeastern Germany, Austria, western Poland, northern Italy, and Holland Austria: German-speaking, core kingdom within the Holy Roman Empire Hapsburgs: ruling family dynasty of the Holy Roman Empire Empress Maria Theresa: only female to ever hold the title, “Holy Roman Empress” who is remembered for her progressive reforms War of Austrian Succession: series of wars sparked by succession crisis in the Holy Roman Empire Emperor Joseph II: son of Maria Theresa, Holy Roman Emperor remembered as one of Europe’s best monarchs because of his progressive reforms Enlightened Despotism: Set of practices carried out by autocratic/despotic European monarchs who were influenced by the progressive reforms of the Enlightenment Josephism: set of practices/policies implemented by Holy Roman Emperor Joseph II Edict of Tolerance: decree under Joseph II that granted religious toleration in the Holy Roman Empire to Lutherans, Calvinists, Orthodox Serbs, and ultimately, Jews The Holy Roman Empire Traditionally, the office of Holy Roman Emperor was elective, although frequently controlled by dynasties, such as the Hapsburgs of Austria. The German prince-electors, the highest-ranking noblemen of the empire, usually elected one of their peers to be the emperor and he would later be crowned by the Pope. In time, the empire evolved into a decentralized, limited elective monarchy composed of hundreds of sub-units, principalities, duchies, counties, free imperial cities, and other domains. 
The power of the emperor was limited, and while the various princes, lords, bishops, and cities of the empire were vassals who owed the emperor their allegiance, they also possessed an extent of privileges that gave them de facto independence within their territories. The Hapsburgs and the Holy Roman Empire The Habsburgs held the title of Holy Roman Emperor between 1438 and 1740 and again from 1745 to 1806. Although one family held the title for centuries, the Holy Roman Emperor was elected and the position never became hereditary. This contrasted with the power that the Habsburgs held over territories under their rule, which did not overlap with the Holy Roman Empire. From the 16th century until the formal establishment of the Austrian Empire in 1804, those lands were unofficially called the Habsburg or Austrian Monarchy. They changed over the centuries, but the core always consisted of the Hereditary Lands (most of the modern states of Austria and Slovenia, as well as territories in northeastern Italy and southwestern Germany); the Lands of the Bohemian Crown; and the Kingdom of Hungary. Many other lands were also under Habsburg rule at one time or another. Empress Maria Theresa Maria Theresa was the only female to bear the title, "Holy Roman Empress," and also to wield the political power associated with that position. Law at the time forbade the ascension of women to the throne. Although technically a co-regent (along with her husband, Francis Stephen) of the Holy Roman Empire, Maria Theresa privately retained the power of her house as a supreme autocrat in charge of all decision-making regarding domestic and foreign affairs. She is widely remembered for her sweeping internal reforms in religion, education, and public health, and for her role in the War of Austrian Succession.
The War of Austrian Succession Frederick the Great’s 1740 invasion of resource-rich and strategically located Silesia marked the onset of the War of Austrian Succession and aimed to unify the disconnected lands under Frederick’s rule. Background In 1740, Holy Roman Emperor Charles VI died. His daughter, Maria Theresa, succeeded him as ruler of the Hapsburg lands. She was not, however, a candidate for the title of Holy Roman Emperor, which had never been held by a woman. The plan was for her to succeed to the hereditary lands, while her husband, Francis Stephen, would be elected Holy Roman Emperor. Also in 1740, Frederick the Great became King of Prussia. As such, a fight between the monarchs of the Holy Roman Empire and Prussia was imminent. Frederick was to rule Brandenburg because Prussia and Brandenburg, a kingdom in northern Germany, had maintained close connections since the early 17th century. But legally, Brandenburg was still part of the Holy Roman Empire. The War Consumes Europe Hoping to unify his disconnected lands and secure the prosperous, resource-rich Austrian province of Silesia, Frederick disputed the succession of Maria Theresa. Instead, he made his own claim on Silesia. The War of Austrian Succession began on December 16, 1740, when Frederick invaded and quickly occupied Silesia. The War of the Austrian Succession (1740–1748) escalated and eventually involved most of the powers of Europe. Frederick the Great's repeated victories on the battlefields of Bohemia and Silesia forced his enemies to seek peace terms. Under the terms of the Treaty of Dresden, signed in December 1745, Austria gave Silesia to Prussia. In exchange, Frederick recognized Maria Theresa’s husband/consort—Francis I—as the Holy Roman Emperor. Maria Theresa officially gained the title of "Holy Roman Empress" by being married to her husband, the emperor. Despite being the "emperor's wife," the real power of the monarchy was held by Maria Theresa.
She remained responsible for all decisions, spoke with court advisors, and determined royal decrees. Maria Theresa's Domestic Reforms Religion Maria Theresa was a devout Roman Catholic. Consequently, she explicitly rejected the idea of religious toleration but never allowed the Church to interfere with what she considered to be the prerogatives of a monarch. She controlled the selection of religious officials within the Holy Roman Empire. The empress supported conversion to Roman Catholicism. She tolerated Greek Catholics and emphasized their equal status with Roman Catholics. Convinced by her advisors that the Jesuits posed a danger to her monarchical authority, she hesitantly issued a decree that removed them from all the institutions of the monarchy. Though she eventually gave up trying to convert her non-Catholic subjects to Roman Catholicism, Maria Theresa regarded both the Jews and Protestants as dangerous to the state and actively tried to suppress them. The empress was arguably the most anti-Semitic monarch of her time, yet, like many of her contemporaries, she supported Jewish commercial and industrial activity. Administrative and State Reforms Maria Theresa implemented significant reforms to strengthen Austria’s military and bureaucratic efficiency. She employed Count Friedrich Wilhelm von Haugwitz, who modernized the empire by creating a standing army of 108,000 men. Under Haugwitz, she centralized administration with a permanent civil service. She also oversaw the unification of the Austrian and Bohemian chancelleries in May 1749 and doubled the state revenue between 1754 and 1764. These financial reforms greatly improved the economy. In 1760, Maria Theresa created the council of state, which served as a committee of experienced people who advised her. The council lacked executive or legislative authority but nevertheless was distinguishable from the form of government employed by Frederick II of Prussia.
Unlike the latter, Maria Theresa was not an autocrat who acted as her own minister. Public Health Maria Theresa invested in reforms that advanced public health. She recruited Gerard van Swieten, who founded the Vienna General Hospital, revamped Austria’s educational system, and served as the Empress’s personal physician. After calling in van Swieten, Maria Theresa asked him to study the problem of infant mortality in Austria. Following his recommendation, she made a decree that autopsies would be mandatory for all hospital deaths in Graz, Austria’s second-largest city. This law – still in effect today – combined with the relatively stable population of Graz, resulted in one of the most important and complete autopsy records in the world. Maria Theresa banned the creation of new burial grounds without prior government permission, thus countering wasteful and unhygienic burial customs. Her decision to have her children inoculated after the smallpox epidemic of 1767 was responsible for changing Austrian physicians’ negative view of inoculation. Education Aware of the inadequacy of bureaucracy in Austria, Maria Theresa reformed education in 1775. In a new school system, all children of both genders had to attend school between ages six and twelve. Education reform was met with much hostility. Maria Theresa crushed the dissent by ordering the arrest of those who opposed. The reforms, however, were not as successful as expected since no funding was offered from the state, education in most schools remained substandard, and in many parts of the empire forcing parents to send their children to school was ineffective. The empress permitted non-Catholics to attend university and allowed the introduction of secular subjects such as law, which influenced the decline of theology as the main foundation of university education. 
Educational reform also extended to Vienna University, reorganized by van Swieten from 1749; the founding of the Theresianum (1746) as a civil service academy; and other new military and foreign service academies. Maria Theresa's Later Years Maria Theresa was devastated by her husband’s death in 1765. She abandoned all ornamentation, had her hair cut short, painted her rooms black, and dressed in mourning for the rest of her life. She completely withdrew from court life, public events, and theater. She described her state of mind shortly after Francis’s death: “I hardly know myself now, for I have become like an animal with no true life or reasoning power.” Following Francis' death, their eldest son, Joseph, became Holy Roman Emperor. A New Light for the Hapsburgs: Holy Roman Emperor Joseph II As a proponent of enlightened despotism, Joseph II introduced a series of reforms that affected nearly every realm of life in his empire; however, his commitment to modernization caused significant opposition to his plans, which eventually led to a failure to fully implement his programs. Rise of Joseph II Joseph II was Holy Roman Emperor from 1765 to 1790. He was the eldest son of Maria Theresa and her husband, Francis I. As women were never elected to be Holy Roman Emperor, Joseph took the title after his father’s death in 1765, yet it was his mother who remained the ruler of the Habsburg lands. However, Maria Theresa, devastated after her husband’s death and always relying on the help of advisors, declared Joseph to be her new co-ruler the same year. From then on, mother and son had frequent ideological disagreements. Joseph often threatened to resign as co-regent and emperor. When Maria Theresa died in 1780, Joseph became the absolute ruler over the most extensive realm of Central Europe. Joseph, deeply interested in the ideals of the Enlightenment, was always positive that the rule of reason would produce the best possible results in the shortest time.
He issued 6,000 edicts in all and 11,000 new laws designed to regulate and reorder every aspect of the empire. He intended to improve his subjects’ lives but strictly in accordance with his own criteria. This made him one of the most committed enlightened despots. Josephism Josephism, as his policies were called, is notable for the very wide range of reforms designed to modernize the creaky empire in an era when France and Prussia were rapidly advancing. However, it elicited grudging compliance at best and more often vehement opposition from all sectors in every part of his empire. Joseph set about building a rational, centralized, and uniform government for his diverse lands but with himself as supreme autocrat. No parliament existed to challenge his policies. He expected government servants to all be dedicated agents of Josephism and selected them without favor for class or ethnic origins. Promotion was solely by merit. To impose uniformity, he made German the compulsory language of official business throughout the Empire. Tax and Land Reform In 1781, Joseph issued the Serfdom Patent, which aimed to abolish aspects of the traditional serfdom system and to establish basic civil liberties for the serfs. The Patent granted the serfs some legal rights in the Habsburg monarchy, but it did not affect the financial dues and the unpaid labor that the serfs legally owed to their landlords. In practice, it did not abolish serfdom; rather, it expanded selected rights of serfs. Joseph II recognized the importance of further reforms, continually attempting to destroy the economic subjugation through related laws, such as his Tax Decree of 1789. This new law would have finally realized Emperor Joseph II’s ambition to modernize Habsburg society, allowing for the end of corvée and the beginning of lesser tax obligations. Despite the attempts to improve the fate of the peasantry, Joseph’s land reforms met with the resistance of the landed nobility. 
Serfdom was not abolished in the Empire until 1848. Joseph inspired a complete reform of the legal system, abolished brutal punishments and the death penalty in most instances, and imposed the principle of complete equality of treatment for all offenders. He also ended censorship of the press and theater.

Public Health and Education

Joseph continued education and public health reforms initiated by his mother. To produce a literate citizenry, elementary education was made compulsory for all boys and girls, and higher education on practical lines was offered for a select few. Joseph created scholarships for talented poor students and allowed the establishment of schools for Jews and other religious minorities. In 1784, he ordered that the country change its language of instruction from Latin to German, a highly controversial step in a multilingual empire. By the eighteenth century, centralization was the trend in medicine because more and better-educated doctors were requesting improved facilities. Cities lacked the budgets to fund local hospitals, and the monarchy wanted to end costly epidemics and quarantines. Joseph attempted to centralize medical care in Vienna through the construction of a single, large hospital, the famous Allgemeines Krankenhaus, which opened in 1784. Centralization worsened sanitation problems, causing epidemics and a 20% death rate in the new hospital. However, the city became preeminent in the medical field in the next century.

Religion

The most unpopular of all his reforms was his attempt to modernize the highly traditional Catholic Church. Clergymen were deprived of the tithe and ordered to study in seminaries under government supervision, while bishops had to take a formal oath of loyalty to the crown. As a man of the Enlightenment, Joseph ridiculed the rigid church orders. He suppressed a third of the monasteries (over 700 were closed) and reduced the number of monks and nuns from 65,000 to 27,000.
Marriage was defined as a civil contract outside the jurisdiction of the Church. Joseph also sharply cut the number of holy days to be observed in the Empire and forcibly simplified the way the Mass was celebrated. Opponents of the reforms insisted they revealed Protestant tendencies, along with the rise of Enlightenment rationalism and the emergence of a liberal class of bourgeois officials. Joseph's enlightened despotism also included the Patent of Toleration in 1781 and the Edict of Tolerance in 1782. The Patent granted religious freedom to the Lutherans, Calvinists, and Serbian Orthodox, but it wasn't until the 1782 Edict of Tolerance that Joseph II extended religious freedom to the Jewish population. Providing the Jewish subjects of the Empire with the right to practice their religion came with the assumption that this freedom would gradually draw Jewish men and women into mainstream German culture. While it allowed Jewish children to attend schools and universities, allowed adults to engage in jobs from which they had been excluded, and freed all Jewish men and women from wearing the gold stars that marked their identity, it also stipulated that the Jewish languages—the written language Hebrew and the spoken language Yiddish—were to be replaced by the national language of the country. Official documents and school textbooks could not be printed in Hebrew.

Absolutist Russia

Absolutist Russia is characterized by the reign of Peter I (Peter the Great). Peter's years as tsar were marked by power struggles, Peter's European travels, sweeping domestic reform, and territorial expansion.
Learning Objectives
- Identify the major domestic reforms introduced by Peter I of Russia
- Evaluate how Russia was an absolutist society

Key Terms / Key Concepts

Peter I (Peter the Great): Romanov tsar of Russia who introduced significant internal reforms during the eighteenth century

Romanov: imperial family of Russia from the seventeenth to twentieth centuries

Tsar: the Russian emperor

Westernization of Russia: Peter the Great's internal reforms, which sought to turn Russia into a country socially and militarily akin to those in Western Europe

Boyars: Russian nobles

Serfs: Russian peasantry who were forced to work (primarily in agriculture) on the estates of the boyars

beard-tax: tax implemented by Peter I in which men who wore long beards had to pay a tax, part of Peter's westernization of Russia

Great Northern War: war between Russia and its allies on one side and Sweden on the other that established Russia as a dominant naval power in Eastern Europe

Eastern Orthodox Church: branch of Christianity separate from Catholicism that is traditionally practiced in Eastern Europe, including Greece and Russia

Holy Synod: governing body of the Russian Orthodox Church under Peter I that blended secular and clerical committee members

Saint Petersburg: Russian capital city located on the Baltic Sea, founded by Peter the Great

Russia under Peter I

Background

Tsar Peter I (Peter the Great) was a member of the Romanov family who ruled Russia and later the Russian Empire from 1682 until his death, jointly ruling before 1696 with his elder half-brother, Ivan V. The Romanovs took over Russia in 1613, and the first decades of their reign were marked by attempts to restore peace, both internally and with Russia's rivals, most notably Poland and Sweden. To avoid more civil war, the boyars cooperated with the first Romanovs, enabling them to finish the work of bureaucratic centralization. Thus, the state required service from both the old and the new nobility, primarily in the military.
In return, the tsars allowed the boyars to enserf the peasants. With the state now fully sanctioning forced labor on the nobles' estates, serf rebellions were rampant.

Peter the Great's Childhood

From an early age, Peter's education was put in the hands of several tutors. In 1676, Peter's father Tsar Alexis died, leaving the throne to Peter's elder half-brother Feodor III. Throughout this period, the government was largely run by Artamon Matveev—an enlightened friend of Alexis and one of Peter's greatest childhood benefactors. This changed when Feodor died without an heir in 1682. A dispute immediately arose between the Miloslavsky family and the Naryshkin family over who should inherit the throne. Peter's other half-brother, Ivan V, was next in line for the throne, but he was chronically ill. Consequently, the Russian council (Duma) chose 10-year-old Peter to become tsar, with his mother as regent.

Taking Power

While Peter was not particularly concerned that others ruled in his name, his mother sought to force him to adopt a more conventional approach. She arranged his marriage to Eudoxia Lopukhina in 1689, but the marriage was a failure; ten years later Peter forced his wife to become a nun and thus freed himself from the union. By the summer of 1689, Peter planned to take power from his half-sister Sophia, whose position had been weakened by two unsuccessful Crimean campaigns. After a power struggle, Sophia was eventually overthrown, with Peter I and Ivan V continuing to act as co-tsars. Still, Peter was not able to acquire actual control over Russian affairs. When his mother, Nataliya, died in 1694, Peter became an independent ruler, and, after his brother Ivan's death in 1696, the sole ruler.

Early Reign and the "Westernization of Russia"

Peter implemented sweeping reforms designed to modernize Russia in ways that modeled Western Europe's social and military structures.
His advisors, largely from Western Europe, argued that Russia lagged two hundred years behind the rest of Europe in terms of its societal development. This argument resonated deeply with Peter, who refused to accept Russia's status as a large but backward and underdeveloped country.

Peter Restructures the Russian Military

Background

One of the primary threats to the Russian Empire was the Ottoman Empire (present-day Turkey). Peter knew that Russia could not face the Ottoman Empire alone, and in 1697 he traveled to Europe to seek allies. Keeping the tsar's journey a secret was essential for his protection but also challenging: the black-haired, athletic tsar stood nearly seven feet tall and always traveled with an entourage of a few hundred servants and advisors. Equally identifying was Peter's volatile but passionate temperament, which he inflamed by indulging in alcohol. Still, the tsar embarked incognito on an eighteen-month journey with a large Russian delegation to seek the aid of the European monarchs. However, the mission failed, as Europe was at the time preoccupied with the question of the Spanish succession. Peter's visit was cut short in 1698, when he was forced to rush home because of an internal rebellion. The rebellion was easily crushed, and Peter acted ruthlessly towards the mutineers: over 1,200 of the rebels were tortured and executed, and Peter ordered that their bodies be publicly exhibited as a warning to future conspirators. Although Peter's delegation failed to complete its political mission of creating an anti-Ottoman alliance, Peter continued the European trip, learning about life in Western Europe. While visiting Holland in 1697, he learned the shipbuilding craft and visited families of art and coin collectors. From Dutch experts, craftsmen, and artists, Peter learned how to draw teeth, catch butterflies, and paint seascapes. In England, he also engaged in painting and navy-related activities.
During his time in England, he also observed techniques of city building that he would later use to great effect at Saint Petersburg. Furthermore, in 1698 Peter sent a delegation to Malta to observe the training and abilities of the Knights of Malta and their fleet.

Peter Restructures the Russian Military

In 1699, Peter prioritized restructuring the Russian military, which had previously been disorganized, small, and poorly trained. Having witnessed Western armies and heard of them from his advisors, Peter dramatically increased the army's size by creating a standing force of over 130,000 soldiers. When recruits could not be found, Peter drafted serfs. Each of the new soldiers received uniform training and severe discipline, thereby creating strong camaraderie and strength among the units. Additionally, he created two separate elite units. The result was a large and strong Russian army on par with its western counterparts. Similarly, Russia had no navy before Peter I. Inspired by his visit to England, where he had studied the English navy, Peter sought to develop Russian naval power. In 1703, the Russian Baltic Sea Fleet was founded and later expanded. Naval schools were established where sailors were taught navigation, astronomy, and mathematics, as well as military tactics. By the end of Peter's reign, roughly 30,000 sailors served in the Russian navy.

Peter the Great's Foreign Policies

Great Northern War

Between 1560 and 1658, Sweden created a Baltic empire centered on the Gulf of Finland. Peter I wanted to re-establish a Baltic presence by regaining access to the territories that Russia had lost to Sweden in the first decades of the seventeenth century. In 1700, Peter, supported by his Danish and Norwegian allies, declared war on Sweden. Sweden parried the Danish and Russian attacks.
Charles XII moved from Saxony into Russia to confront Peter, but the campaign ended with the destruction of the main Swedish army at the decisive 1709 Battle of Poltava. The last Swedish-held city in the region, Riga (in present-day Latvia), fell to the Russians in 1710. Sweden proper was invaded from the west by Denmark and Norway and from the east by Russia, which had occupied Finland by 1714. The Danish forces were defeated. Swedish king Charles XII opened up a Norwegian front, but he was killed in 1718. The war ended with Sweden's defeat, leaving Russia as the new dominant power in the Baltic region and a major force in European politics. The formal conclusion of the war was marked by the Swedish–Hanoverian and Swedish–Prussian Treaties of Stockholm and the Russo–Swedish Treaty of Nystad. In all of them, Sweden ceded some territories to its opponents. As a result, Russia gained vast Baltic territories and became one of the greatest powers in Europe.

Peter's Domestic Reforms

Background

By the time Peter the Great became tsar, Russia was the largest country in the world, stretching from the Baltic Sea to the Pacific Ocean. Much of Russia's expansion had taken place in the seventeenth century, culminating in the first Russian settlement of the Pacific in the mid-seventeenth century. However, most of the land was unoccupied, travel was slow, and most of the fourteen million citizens were farmers. Russian agriculture, with its short growing season, was ineffective and lagged behind that of Western Europe. Russia also remained isolated from sea trade, and its internal trade, communications, and many manufactures depended on seasonal changes.

Peter Implements Change at Home

Peter I was a strong reformer who modernized Russia in many ways, but he was also a ruthless autocrat. His visits to the West impressed upon him the notion that European customs were superior to Russian traditions.
Unlike most of his predecessors and successors, he attempted to follow Western European traditions, fashions, and tastes. He also sought to end arranged marriages, which were the norm among the Russian nobility, because he thought such a practice was barbaric and led to domestic violence. He forced social modernization at home by introducing French and western dress to his court. Courtiers, state officials, and members of the military were now forced to shave their beards, abandon traditional Russian clothing, and wear western European clothing styles. To achieve this goal, Peter introduced taxes on long beards and traditional Russian robes in September 1698. The beard tax outraged the boyars, who had worn robes and long beards for centuries. For Peter, their outrage was a victory. He saw the boyars as outdated, irrelevant, and an internal threat to his reign. They opposed westernization and promoted Russian traditionalism. Reducing their influence became a central goal for Peter, and he introduced numerous taxes that directly targeted the boyars and required numerous services of them.

Finance

Peter's government was constantly in dire need of money. At first, it responded by monopolizing highly valuable industries, such as salt, vodka, oak, and tar. Peter also taxed many Russian cultural customs and issued tax stamps for paper goods. However, with each new tax came new loopholes and new ways to avoid them, and so it became clear that tax reform was simply not enough. The solution was a sweeping new poll tax, which replaced a household tax on cultivated land. Now, each peasant was assessed individually for a tax paid in cash. This new tax was significantly heavier than the taxes it replaced, and it enabled the Russian state to expand its treasury almost sixfold between 1680 and 1724. Peter also pursued protective trade policies, placing heavy tariffs on imports and trade to maintain a favorable environment for Russian-made goods.
Subjugation of the Peasants

Peter's reign deepened the subjugation of serfs by landowners. He firmly enforced class divisions, and his tax code significantly expanded the number of taxable workers, shifting an even heavier burden onto the shoulders of the working class. Legislation under Peter's rule covered every aspect of life in Russia with exhaustive detail, and it significantly affected the everyday lives of nearly every Russian citizen. The success of reform contributed greatly to Russia's military successes and the increase in revenue and productivity. More importantly, Peter created a state that further legitimized and strengthened authoritarian rule in Russia. Testaments to this lasting influence are the many public institutions in the Soviet Union and the Russian Federation, which trace their origins back to Peter's rule.

Church Reforms

The Russian tsars traditionally exerted some influence on church operations. However, until Peter's reforms the church had been relatively free to operate as it saw fit. Peter lost the support of the Russian clergy over his modernizing reforms because priests and churches became very suspicious of his friendship with foreigners and his alleged Protestant leanings. The tsar did not abandon Orthodoxy as the main ideological core of the state, but he attempted to start a process of westernization of the clergy, relying on those with Western theological education. Simultaneously, Peter remained faithful to the canons of the Eastern Orthodox Church. The traditional leader of the church was the Patriarch of Moscow. In 1700, when the office became vacant, Peter refused to name a replacement and created the position of "custodian of the patriarchal throne," which he controlled by appointing his own candidates. He could not tolerate the thought that a patriarch could have power superior to the tsar's. In 1721, he established the Holy Synod, which replaced the Patriarch. It was administered by an educated but secular director.
The Synod changed in composition over time, but it remained a committee of churchmen headed by an appointee of the emperor. Furthermore, a new ecclesiastical educational system was begun under Peter. It aimed to improve the usually very poor education of local priests and monks. However, the curriculum was so westernized that monks and priests, while being formally educated, received poor training in preparation for a ministry to a Russian-speaking population steeped in the traditions of Eastern Orthodoxy.

Saint Petersburg

In 1703, during the Great Northern War, Peter the Great established the Peter and Paul fortress on small Hare Island, by the north bank of the Neva River. The fortress was the first brick and stone building of the new projected capital city of Russia and the original citadel of what would eventually be Saint Petersburg. The city was built by conscripted peasants from all over Russia, and tens of thousands of serfs died building it. Peter moved the capital from Moscow to Saint Petersburg in 1712, though he had referred to Saint Petersburg as the capital as early as 1704.

Succession

Peter had two wives, with whom he had fourteen children, but only three survived to adulthood. Upon his return from his European tour in 1698, he ended his unhappy, arranged marriage to Eudoxia Lopukhina, divorcing the empress and forcing her into a convent. Only one child from the marriage, Tsarevich Alexei, survived past his childhood. In 1712, Peter formally married his long-time mistress, Martha Skavronskaya, who upon her conversion to the Russian Orthodox Church took the name Catherine. Peter suspected his eldest child and heir, Alexei, of being involved in a plot to overthrow the emperor. Alexei was tried and confessed under torture during questioning conducted by a secular court. He was convicted and sentenced to be executed.
The sentence could be carried out only with Peter's signed authorization, but Peter hesitated before making the decision, and Alexei died in prison. In 1724, Peter had his second wife, Catherine, crowned as empress, although he remained Russia's actual ruler. He died a year later without naming a successor. As Catherine represented the interests of the "new men" (commoners who had been brought to positions of great power by Peter based on competence), a successful coup was arranged by her supporters to prevent the old elites from controlling the laws of succession. Catherine was the first woman to rule Imperial Russia (as empress), opening the legal path for a century almost entirely dominated by women, including her daughter Elizabeth and granddaughter-in-law Catherine the Great, all of whom continued Peter the Great's policies in modernizing Russia.

Primary Source: "On Forms of Government" (by Frederick II)

Frederick II of Prussia (r. 1740-1786), "Essay on the Forms of Government" [Abridged]

A sovereign must possess an exact and detailed knowledge of the strong and of the weak points of his country. He must be thoroughly acquainted with its resources, the character of the people, and the national commerce.... Rulers should always remind themselves that they are men like the least of their subjects. The sovereign is the foremost judge, general, financier, and minister of his country, not merely for the sake of his prestige. Therefore, he should perform with care the duties connected with these offices. He is merely the principal servant of the State. Hence, he must act with honesty, wisdom, and complete disinterestedness in such a way that he can render an account of his stewardship to the citizens at any moment. Consequently, he is guilty if he wastes the money of the people, the taxes which they have paid, in luxury, pomp and debauchery.
He who should improve the morals of the people, be the guardian of the law, and improve their education should not pervert them by his bad example. Princes, sovereigns, and kings have not been given supreme authority in order to live in luxurious self-indulgence and debauchery. They have not been elevated by their fellow-men to enable them to strut about and to insult with their pride the simple-mannered, the poor and the suffering. They have not been placed at the head of the State to keep around themselves a crowd of idle loafers whose uselessness drives them towards vice. The bad administration which may be found in monarchies springs from many different causes, but their principal cause lies in the character of the sovereign. A ruler addicted to women will become a tool of his mistresses and favourites, and these will abuse their power and commit wrongs of every kind, will protect vice, sell offices, and perpetrate every infamy.... The sovereign is the representative of his State. He and his people form a single body. Ruler and ruled can be happy only if they are firmly united. The sovereign stands to his people in the same relation in which the head stands to the body. He must use his eyes and his brain for the whole community, and act on its behalf to the common advantage. If we wish to elevate monarchical above republican government, the duty of sovereigns is clear. They must be active, hard-working, upright and honest, and concentrate all their strength upon filling their office worthily. That is my idea of the duties of sovereigns.
From Modern History Sourcebook, Fordham University Attributions Images from Wikimedia Commons Boundless World History "Frederick the Great and Prussia" https://courses.lumenlearning.com/boundless-worldhistory/chapter/frederick-the-great-and-prussia/ "The Holy Roman Empire" https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-holy-roman-empire-2/ "The Modernization of Russia" https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-modernization-of-russia/ "On Forms of Government." Frederick II. Fordham University. Internet History Sourcebooks (fordham.edu)
Maintaining the Balance of Power in Europe: The Seven Years War

Overview

The Seven Years' War is often nicknamed "the First World War," for though it preceded World War I by more than a century, it involved many of the world's strongest empires and was fought around the world.

Learning Objectives
- Analyze the global impact of the Seven Years' War.

Key Terms / Key Concepts

Seven Years' War: the official name of the eighteenth-century war fought between all of the major European powers, all over the world

Jumonville Glen: forested area about an hour south of Pittsburgh, PA, where the "French and Indian War" officially began

French and Indian War: nine-year conflict in North America between the French, the English, and their respective Native American allies

Treaty of Paris (1763): peace treaty that ended the Seven Years' War

A Global Conflict: The Seven Years' War

The Seven Years' War was fought between 1756 and 1763. It involved every European great power of the time except the Ottoman Empire and spanned five continents, affecting Europe, the Americas, West Africa, India, and the Philippines.

The Seven Years' War: Opponents at War

Although technically fought for nine years, the Seven Years' War was primarily fought in Europe from 1756 to 1763. The war split Europe into two sides: the first was led by Great Britain and its allies—primarily Prussia under Frederick the Great and the German electorate of Hanover; the second was led by France and its allies—primarily Sweden, Russia, Austria, and the German principality of Saxony.

Opening Chapter: The French and Indian War

On May 28, 1754, a young major from Virginia named George Washington led a small group of English soldiers and Seneca Indians to a remote part of the Pennsylvania frontier. French soldiers had positioned themselves close to the English colony, too close for the comfort of the Virginia governor.
Washington was ordered to negotiate the French withdrawal. The small group of French soldiers was encamped in a clearing in the Pennsylvania woods, known now as Jumonville Glen, just south of present-day Pittsburgh. There, Washington's soldiers and Seneca allies quickly encircled the French, and a skirmish erupted. Specific details of the events remain contested; however, by the end of the skirmish over twenty Frenchmen had been killed, many of them having previously surrendered. Washington, confused and uncertain about what had occurred, beat a hasty retreat to the English fort, Fort Necessity. Eighteenth-century English politician and writer Horace Walpole later said of Washington's actions, "The volley fired by a young Virginian in the backwoods of America set the world on fire." The Jumonville Glen massacre, as it is now remembered, was the opening act of both the French and Indian War in North America and the global conflict fought between the English and French throughout their empires: the Seven Years' War. For the subsequent nine years (1754–1763), the French and Indian War, so named by the English, was fought up and down the American frontier and in parts of eastern Canada. Caught in the middle of the conflict were the Native Americans, who often felt they had to ally militarily with either the French or the English. The war initially favored the French, who had stronger Native American alliances, better organization, and more prepared military leaders. With the capture of Quebec and the French port city of Louisbourg on Cape Breton Island, however, the French capitulated. The English had won the North American conflict, but the war against France still raged around the world.

Europe

Two years after the French and Indian War began in North America, war erupted in Europe over territorial disputes.
Primarily, the European conflict raged between the Austrian Habsburgs and the Prussians under Frederick the Great, with each side supported by a complex system of alliances. In the end, the sweeping battles that occurred across Western and Central Europe were costly in terms of finances and human lives, but mostly indecisive.

South America

In the Fantastic War (1762–63) in South America, Spanish forces conquered the Portuguese territories of Colonia del Sacramento and Rio Grande de São Pedro and forced the Portuguese to surrender and retreat. Under the Treaty of Paris (1763), Spain had to return the colony of Sacramento to Portugal.

India

In India, the outbreak of the war in Europe renewed the long-running conflict between the French and the British trading companies for influence. The war spread beyond Southern India and into Bengal, eventually eliminating French power in India.

West Africa

In West Africa in 1758, the British captured Senegal and brought home large amounts of captured goods. This success convinced the British to launch two further expeditions to take the island of Gorée and the French trading post on the Gambia. The loss of these valuable colonies further weakened the French economy.

The Treaty of Paris (1763)

The Treaty of Paris was signed on February 10, 1763 by the kingdoms of Great Britain, France, Spain, and Portugal following Great Britain's victory over France and Spain during the Seven Years' War. The signing of the treaty formally ended the Seven Years' War and marked the beginning of British dominance outside Europe. The treaty did not involve Prussia and Austria, as they signed a separate agreement five days later: the Treaty of Hubertusburg.
Attributions Images courtesy of Wikimedia Commons Boundless World History "The Seven Years War" https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-seven-years-war/ "The War of Spanish Succession" https://courses.lumenlearning.com/boundless-worldhistory/chapter/war-of-spanish-succession/
Age of Discovery: Exploration

Overview

Context and Background of the Age of Discovery

The Age of Discovery—or Age of Exploration—occurred within the larger context of European expansion during the second half of the Middle Ages and continued into the early modern period.

Learning Objectives
- Identify the dynamics of trade and political power that led to European exploration of the New World.
- Describe the significance of great explorers such as da Gama, Columbus, and Magellan and how their voyages changed Europe's conception of the globe and of their world.
- Understand the impact of the arrival of Europeans on native cultures and how the native rulers in the Americas tried to use the arrival of the Europeans for their own political ends.
- Assess the impact of the Columbian Exchange on both the New World and the Old from an environmental, demographic, ecological, social, and economic perspective.

Key Terms / Key Concepts

Marco Polo: Venetian merchant and explorer who travelled across Asia during the last third of the thirteenth century and inspired later explorers, such as Christopher Columbus

Norse Explorers

Norsemen became the first Europeans to strike out westward across the Atlantic Ocean during the ninth century. Norse explorers reached Iceland during the ninth century, Greenland during the tenth century, and North America at the turn of the eleventh century. While they withdrew from North America later in the eleventh century, they foreshadowed the Age of Discovery/Exploration that began in the fifteenth century.

Prelude to the Age of Discovery

A prelude to the Age of Discovery was a series of European land expeditions across Eurasia in the late Middle Ages. These expeditions occurred within the context of late medieval European economic development and growth, along with a budding sense of curiosity about the world fostered by new universities.
At the end of the eleventh century, the initiation of the Crusades exposed Europeans, particularly merchants from the Italian city-states, to new opportunities for trade from the eastern shore of the Mediterranean across Asia. European medieval knowledge about Asia came from reports dating back to the time of the conquests of Alexander the Great (~323 BCE). An updated notion of the world was provided in 1154, when Arab geographer Muhammad al-Idrisi created the Tabula Rogeriana at the behest of King Roger II of Sicily. The resulting manuscript, written in Arabic, is a description of the world as it was known to al-Idrisi. It contained maps showing the Eurasian continent but did not include anything past the northern part of the African continent. It remained the most accurate world map for the next three centuries, but it also demonstrated that the southern extent of Africa was only partially known by European and Arab seamen at that time. A series of European expeditions crossing Eurasia by land in the late Middle Ages also marked a prelude to the Age of Discovery. Although the Mongols had threatened Europe with pillage and destruction, Mongol states unified much of Eurasia and, from 1206 on, allowed safe trade routes and communication lines stretching from the Middle East to China. A series of Europeans took advantage of these in order to explore eastward. Most were Italians, as trade between Europe and the Middle East was controlled mainly by the maritime republics. During the Mongol invasions of Syria, Christian embassies were sent as far as Karakorum; from these missions Europeans gained a greater understanding of the world. The first of these travelers was Giovanni da Pian del Carpine, who journeyed to Mongolia and back from 1241 to 1247. About the same time, Russian prince Yaroslav of Vladimir traveled to the Mongolian capital, and his sons later did the same. These expeditions are thought to have had strong political implications, but they did not result in detailed accounts.
Marco Polo, a Venetian merchant, dictated an account of journeys throughout Asia from 1271 to 1295. His travels are recorded in Book of the Marvels of the World (also known as The Travels of Marco Polo, c. 1300), a book which did much to introduce Europeans to Central Asia and China. Marco Polo was not the first European to reach China, but he was the first to leave a detailed chronicle of his experience. The book inspired Christopher Columbus and many other travelers. The Travels of Marco Polo: Marco Polo traveling, miniature from the book The Travels of Marco Polo (Il milione), originally published during Polo’s lifetime (c. 1254 – January 8, 1324), but frequently reprinted and translated. From the thirteenth through the fifteenth centuries, others explored portions of Africa and Eurasia in personal travels, leaving accounts in the process. During the thirteenth century, André de Longjumeau of France and the Flemish William of Rubruck reached Mongol-controlled China through Central Asia. From 1325 to 1354, Ibn Battuta—a Moroccan scholar from Tangier—journeyed extensively through Europe, Africa, the Middle East, and Asia, recording his impressions in his account. Between 1405 and 1421, Ma Huan, a Muslim voyager and translator, reported on a series of long-range tributary missions sponsored by the Yongle Emperor of Ming China; this provided knowledge of Arabia, East Africa, India, Maritime Southeast Asia, and Thailand. In 1439, Niccolò de’ Conti published an account of his travels to India and Southeast Asia, and the Russian merchant Afanasy Nikitin of Tver travelled to India from 1466 to 1472. The Age of Discovery The geographical exploration of the late Middle Ages eventually led to what today is known as the Age of Discovery: a loosely defined European historical period that took place from the 15th century to the 18th century, during which extensive overseas exploration emerged as a powerful factor in European culture and globalization.
Global exploration started with the successful Portuguese voyages to the Atlantic archipelagos of Madeira and the Azores, along the coast of Africa, and, in 1498, along the sea route to India, as well as with the trans-Atlantic voyages of Christopher Columbus between 1492 and 1502 and the first circumnavigation of the globe between 1519 and 1522. These discoveries led to numerous naval expeditions across the Atlantic, Indian, and Pacific oceans, as well as land expeditions in the Americas, Asia, Africa, and Australia that continued into the late 19th century. The period ends with the exploration of the polar regions in the 20th century. Many lands previously unknown to Europeans were discovered during this period, though most were already inhabited. From the perspective of non-Europeans, the period was not one of discovery, but one of invasion. Portuguese Exploration During the 15th and 16th centuries, Portuguese explorers were at the forefront of European overseas exploration, which led them to reach India, establish multiple trading posts in Asia and Africa, and settle what would become Brazil. As a result, Portugal created one of the most powerful empires of the early modern era. Portuguese sailors were at the vanguard of European overseas exploration, discovering and mapping the coasts of Africa, Asia, and Brazil. As early as 1317, King Denis made an agreement with the Genoese merchant-sailor Manuel Pessanha (Pesagno), appointing him first Admiral with trade privileges for his homeland in return for twenty manned warships. With this agreement, Portugal hoped to defend against Muslim pirate raids. This created the basis for the Portuguese Navy and the establishment of a Genoese merchant community in Portugal. In the second half of the 14th century, outbreaks of bubonic plague led to severe depopulation in Portugal. During this time, the economy was extremely localized in a few towns, unemployment rose, and migration led to agricultural land abandonment.
Only the sea offered alternatives, with most people settling in fishing and trading in coastal areas. During his reign (1325–1357), Afonso IV of Portugal granted public funding to raise a proper commercial fleet, and he ordered the first maritime explorations under the command of admiral Pessanha, with the help of the Genoese. In 1341, the Canary Islands, already known to the Genoese, were officially explored under the patronage of the Portuguese king. In 1344, Castile disputed Portugal’s claims, further spurring Portuguese naval development. Key Terms / Key Concepts Prince Henry the Navigator: royal sponsor of Portuguese voyages of exploration down the west African coast during the first half of the fifteenth century Vasco da Gama: first European explorer to reach India by sailing around Africa spice trade: lucrative trade in exotic, “eastern” spices, such as nutmeg Pedro Alvares Cabral: Portuguese mariner who conducted the first significant European exploration of the northeastern coast of South America in 1500 Ferdinand Magellan: Portuguese mariner who led the first European expedition to sail around the world, 1519–22 Atlantic Exploration In 1415, the city of Ceuta on the north coast of Africa was occupied by the Portuguese, who aimed to control navigation along the African coast.
Young Prince Henry the Navigator was there and became aware of profit possibilities in the Saharan trade routes. He invested in sponsoring voyages down the coast of Mauritania, which led to his gathering a group of merchants, shipowners, stakeholders, and participants interested in the sea lanes. Within two decades of exploration, Portuguese ships bypassed the Sahara. At the time, Europeans did not know what lay beyond Cape Bojador on the African coast. In 1419, two of Henry’s captains—João Gonçalves Zarco and Tristão Vaz Teixeira—were driven by a storm to Madeira, an uninhabited island off the coast of Africa that had probably been known to Europeans since the 14th century. In 1420, Zarco and Teixeira returned with Bartolomeu Perestrelo and began Portuguese settlement of the islands. A Portuguese attempt to capture Grand Canary, one of the nearby Canary Islands that had been partially settled by Spaniards in 1402, was unsuccessful and met with protests from Castile. Around the same time, the Portuguese began to explore the North African coast. Diogo Silves reached the Azores islands of Santa Maria in 1427, and in the following years, Portugal discovered and settled the rest of the Azores. In 1443, Prince Pedro, Henry’s brother, granted him the monopoly of navigation, war, and trade in the lands south of Cape Bojador. This monopoly would later be enforced by two Papal bulls (1452 and 1455), giving Portugal the trade monopoly for the newly appropriated territories and laying the foundations for the Portuguese empire. Until his death in 1460, Henry the Navigator took the lead role in encouraging Portuguese maritime exploration. India and Brazil The long-standing Portuguese goal of finding a sea route to Asia was finally achieved in a ground-breaking voyage commanded by Vasco da Gama. His squadron left Portugal in 1497, rounded the Cape and continued along the coast of East Africa. Then, a local pilot was brought on board who guided them across the Indian Ocean. 
In May 1498, da Gama reached Calicut in western India. Reaching the legendary Indian spice routes unopposed helped the Portuguese improve an economy that, until then, had been based mainly on trade along the North and West African coasts. The spices were at first mostly pepper and cinnamon, but soon included other products new to Europe, and the trade gave Portugal a commercial monopoly for several decades. Da Gama’s voyage was significant and paved the way for the Portuguese to establish a long-lasting colonial empire in Asia. The route meant that the entire voyage would be made by sea and that the Portuguese would not need to cross the highly disputed Mediterranean, nor the dangerous Arabian Peninsula. The second voyage to India was dispatched in 1500 under Pedro Alvares Cabral. While following the same south-westerly route as da Gama across the Atlantic Ocean, Cabral made landfall on the Brazilian coast. This was probably an accident, but it has been speculated that the Portuguese had already known of Brazil’s existence. Cabral recommended to the Portuguese king that the land be settled, and two follow-up voyages were sent in 1501 and 1503. The land was found to be abundant in pau-brasil, or brazilwood, from which it later inherited its name, but the failure to find gold or silver in Brazil meant the Portuguese efforts were concentrated on India. Indian Ocean and Southeast Asia Explorations The aim of Portugal in the Indian Ocean was to ensure the monopoly of the spice trade. Taking advantage of the rivalries that pitted Hindus against Muslims, the Portuguese established several forts and trading posts between 1500 and 1510. After the victorious sea Battle of Diu, the Turks and Egyptians withdrew their navies from India, which allowed Portuguese trade dominance for almost a century and greatly contributed to the growth of the Portuguese Empire. It also marked the beginning of European colonial dominance in Asia.
A second Battle of Diu in 1538 ended Ottoman ambitions in India and confirmed Portuguese hegemony in the Indian Ocean. In 1511, the governor of Portuguese India, Afonso de Albuquerque, sailed to Malacca in Malaysia, the most important eastern point in the trade network, where Malay traders met with Gujarati, Chinese, Japanese, Javan, Bengali, Persian, and Arabic traders. After Albuquerque captured the city, the port of Malacca became the strategic base for Portuguese trade expansion with China and Southeast Asia. Eventually, the Portuguese Empire expanded into the Persian Gulf as Portugal contested control of the spice trade with the Ottoman Empire. In a shifting series of alliances, the Portuguese dominated much of the southern Persian Gulf for the next hundred years. From 1519 to 1522, Ferdinand Magellan, a Portuguese explorer funded by the Spanish Crown, organized the Castilian (Spanish) expedition to the East Indies. Selected by King Charles I of Spain to search for a westward route to the Maluku Islands (the “Spice Islands,” in today’s Indonesia), Magellan headed south through the Atlantic Ocean to Patagonia, passing through the Strait of Magellan into a body of water he named the “peaceful sea” (the modern Pacific Ocean). Despite a series of storms and mutinies, the expedition reached the Spice Islands in 1521; the surviving crew later returned home via the Indian Ocean, completing the first circuit of the globe. After Magellan’s expedition, Spain, under Charles V, sent an expedition to colonize the Maluku Islands in 1525. With this move by Spain, conflict with the Portuguese was inevitable. When García Jofre de Loaísa reached the islands, nearly a decade of skirmishing began. A peace accord was reached in 1529, when the Treaty of Zaragoza assigned the Maluku Islands to Portugal and the Philippines to Spain. How Portugal became the first European imperial sea power:
The Portuguese explorer Vasco da Gama prayed at Nazaré before he set out in 1497, and again after a successful return from his voyage to find a sea route to India with its rich spice trade. He accomplished what Christopher Columbus had attempted but failed to do. Portugal is a country where the sea is, and always has been, regarded as a living being to be confronted. In the process of becoming an imperial sea power, Portugal established trading ports at far-flung locations like Goa, Ormuz, Malacca, Kochi, the Maluku Islands, Macau, and Nagasaki. Guarding its trade from both European and Asian competitors, it dominated not only the trade between Asia and Europe, but also much of the trade between different regions of Asia, such as India, Indonesia, China, and Japan. Jesuit missionaries followed the Portuguese to spread Roman Catholic Christianity to Asia, with mixed success. Spanish Exploration The voyages of Christopher Columbus initiated the European exploration and colonization of the American continents that eventually turned Spain into the most powerful European empire. While Portugal led European explorations of non-European territories, its Iberian rival Castile embarked upon its own mission to create an overseas empire. Castile began to establish its rule over the Canary Islands, located off the West African coast, in 1402; however, Castile became distracted from exploration through most of the 15th century because of internal Iberian politics and the repelling of Islamic invasion and raid attempts. Only late in the century, following the unification of the crowns of Castile and Aragon and the completion of the reconquista, did an emerging modern Spain become fully committed to the search for new trade routes overseas.
In 1492, the joint rulers, Ferdinand of Aragon and Isabel of Castile, conquered the Moorish kingdom of Granada, which had been providing Castile with African goods through tribute. They then decided to fund Christopher Columbus’s expedition. King John II of Portugal had rejected Columbus’s plan twice, in 1485 and 1488, before the Spanish rulers financed it in the hopes of reaching “the Indies” (east and south Asia) by traveling west and bypassing Portugal’s monopoly on west African sea routes. Key Terms / Key Concepts Christopher Columbus: Genoese explorer credited with the discovery of the Americas Ferdinand of Aragon and Isabel of Castile: Spanish monarchs who sponsored Columbus’ 1492 expedition Treaty of Tordesillas: 1494 treaty that divided those parts of the world not yet explored purposefully by Europeans between Portugal and Spain Ferdinand Magellan: Portuguese mariner who led the first European expedition to sail around the world, 1519–22 Columbus’s Voyages On the evening of August 3, 1492, Columbus departed from Palos de la Frontera with three ships: the Santa María, the Pinta (the Painted), and the Santa Clara (nicknamed the Niña). Columbus first sailed to the Canary Islands, where he restocked for what turned out to be a five-week voyage across the ocean, crossing a section of the Atlantic that became known as the Sargasso Sea.
Land was sighted on October 12, 1492, and Columbus, thinking he had found the “West Indies,” named the island, in the present-day Bahamas, San Salvador. He also explored the northeast coast of Cuba and the northern coast of Hispaniola. Columbus left 39 men behind and founded the settlement of La Navidad in what is now Haiti. Following the first American voyage, Columbus made three more. During his second voyage in 1493, he enslaved 560 Native Americans, despite the Queen’s explicit opposition to the idea. The transport of these enslaved natives to Spain resulted in disease and death for hundreds of the captives. In 1498, Columbus left port again with a fleet of six ships. The object of this third voyage was to verify the existence of a continent that King John II of Portugal claimed was located to the southwest of the Cape Verde Islands. He explored the Gulf of Paria, which separates Trinidad from mainland Venezuela, and then the mainland of South America. Columbus described these new lands as belonging to a previously unknown new continent, although he pictured it as being attached to Asia. Finally, the fourth voyage left Spain in 1502, nominally in search of a westward passage to the Indian Ocean. Columbus spent two months exploring the coasts of the modern nations of Honduras, Nicaragua, and Costa Rica, before arriving in Almirante Bay, Panama. After his ships sustained serious damage in a storm off the coast of present-day Cuba, Columbus and his men remained stranded on Jamaica for a year. Help finally arrived, and Columbus and his men arrived back in Castile in November 1504. The Treaty of Tordesillas Shortly after Columbus’s arrival from the “West Indies,” a division of influence became necessary to avoid conflict between the Spanish and Portuguese. An agreement was reached in 1494 with the Treaty of Tordesillas, which divided the world between the two powers.
In the treaty, the Portuguese received everything outside Europe east of a line that ran 370 leagues west of the Cape Verde islands (already under Portuguese control). This gave Portugal control over Africa, Asia, and eastern South America (Brazil). The Spanish (Castile), on the other hand, received everything west of this line; this included territory that proved to be mostly the western part of the Americas, plus the Pacific Ocean islands and the islands reached by Christopher Columbus on his first voyage, Cuba and Hispaniola. Further Explorations of the Americas After Columbus, the Spanish colonization of the Americas was led by a series of soldier-explorers called conquistadors. The Spanish forces, aided by significant advantages in armament and horses, exploited the rivalries between competing indigenous peoples, tribes, and nations. Some of the indigenous tribes were willing to form alliances with the Spanish in order to defeat their more powerful enemies, such as the Aztecs and Incas. Creating such alliances with native tribes is a tactic that would be used extensively by later European colonial powers. The Spanish conquest was also facilitated by the spread of diseases common in Europe but never before present in the New World (e.g., smallpox), which drastically reduced the indigenous populations of the Americas. The resulting labor shortages for plantations and public works led the colonists to initiate the Atlantic slave trade. One of the most accomplished conquistadors was Hernán Cortés, who, leading a relatively small Spanish force, achieved the Spanish conquest of the Aztec Empire (present-day Mexico) in the campaigns of 1519–1521. Of equal importance was the Spanish conquest of the Inca Empire. After years of preliminary exploration and military skirmishes, 168 Spanish soldiers under Francisco Pizarro, along with their native allies, captured the Sapa Inca Atahualpa in the 1532 Battle of Cajamarca.
It was the first step in a long campaign that took decades of fighting, but the campaign ended in 1572 with Spanish victory and colonization of the region, which was later referred to as the Viceroyalty of Peru. The conquest of the Inca Empire led to spin-off campaigns into present-day Chile and Colombia, as well as expeditions towards the Amazon Basin. From 1519 to 1522, the Portuguese mariner Ferdinand Magellan commanded the Castilian expedition that was the first to circumnavigate the globe. Magellan died in the Philippines, but the Basque Juan Sebastián Elcano led the expedition to success. This led to Spain’s attempt to enforce its rights in the Moluccan islands, which led to a conflict with the Portuguese that was finally resolved by the Treaty of Zaragoza in 1529. Further Spanish settlements were progressively established in the New World: New Granada in the 1530s (later the Viceroyalty of New Granada, established in 1717, in present-day Colombia); Lima in 1535 as the capital of the Viceroyalty of Peru; Buenos Aires in 1536 (later part of the Viceroyalty of the Río de la Plata, established in 1776); and Santiago in 1541. Florida was colonized in 1565 by Pedro Menéndez de Avilés. The same year, the first permanent Spanish settlement in the Philippines was founded by Miguel López de Legazpi, and the service of the Manila Galleons was inaugurated. The Manila Galleons shipped goods from all over Asia across the Pacific to Acapulco on the coast of Mexico. From there, the goods were transshipped across Mexico to the Spanish treasure fleets and then shipped on to Spain. The Spanish trading post of Manila was established in 1572 to facilitate this trade. English Exploration Throughout the 17th century, the British established numerous successful American colonies and dominated the Atlantic slave trade, eventually creating the most powerful European empire. The foundations of the British Empire were laid when England and Scotland were separate kingdoms.
In 1496, King Henry VII of England, following the successes of Spain and Portugal in overseas exploration, commissioned John Cabot (born in Venice as Giovanni Caboto) to discover a route to Asia via the North Atlantic. Spain put limited effort into exploring the northern part of the Americas, as its resources were concentrated in Central and South America, where more wealth had been found. Cabot sailed in 1497, five years after Europeans first reached America; although he successfully made landfall on the coast of Newfoundland, there was no attempt to found a colony. He mistakenly believed, as Columbus had, that he had reached Asia. Cabot led another voyage to the Americas the following year, but nothing was heard of his ships again. The Early Empire No further attempts to establish English colonies in the Americas were made until well into the reign of Queen Elizabeth I, during the last decades of the 16th century. In the meantime, the Protestant Reformation had turned England and Catholic Spain into implacable enemies. In 1562, the English Crown encouraged the privateers John Hawkins and Francis Drake to engage in slave-raiding attacks against Spanish and Portuguese ships off the coast of West Africa, with the aim of breaking into the Atlantic trade system.
Drake carried out the second circumnavigation of the world in a single expedition from 1577 to 1580, and he was the first to complete the entire voyage as captain. With his incursion into the Pacific, he inaugurated an era of privateering and piracy off the western coast of the Americas—an area that had previously been free of piracy. In 1578, Elizabeth I granted a patent to Humphrey Gilbert for discovery and overseas exploration. That year, Gilbert sailed for the West Indies with the intention of engaging in piracy and establishing a colony in North America, but the expedition was aborted before it had crossed the Atlantic. In 1583, he embarked on a second attempt, this time to the island of Newfoundland, whose harbor he formally claimed for England, although no settlers were left behind. Gilbert did not survive the return journey to England; he was succeeded by his half-brother, Walter Raleigh, who was granted his own patent by Elizabeth in 1584. Later that year, Raleigh founded the colony of Roanoke on the coast of present-day North Carolina, but lack of supplies caused the colony to fail. Empire in the Americas In 1603, James VI of Scotland ascended to the English throne as James I of England, and in 1604 he negotiated the Treaty of London, ending hostilities with Spain. Now at peace with its main rival, England shifted its attention from preying on other nations’ colonial interests to the business of establishing its own overseas colonies. The Caribbean initially provided England’s most important and lucrative colonies. Colonies in Guiana, St Lucia, and Grenada failed, but settlements were successfully established in St. Kitts (1624), Barbados (1627), and Nevis (1628). The colonies soon adopted the system of sugar plantations successfully used by the Portuguese in Brazil, which depended on slave labor; they initially relied on Dutch ships to sell the slaves and buy the sugar.
To ensure that the increasingly healthy profits of this trade remained in English hands, Parliament passed the Navigation Acts in 1651, decreeing that only English ships could ply their trade in English colonies. In 1655, England annexed the island of Jamaica from the Spanish, and in 1666 it succeeded in colonizing the Bahamas. In 1672, the Royal African Company was inaugurated, receiving from King Charles II a monopoly of the trade to supply slaves to the British colonies of the Caribbean. From the outset, slavery was the basis of the British Empire in the West Indies and later in North America. Until the abolition of the slave trade in 1807, Britain was responsible for the transportation of 3.5 million African slaves to the Americas. Passage of the Navigation Acts by the English Parliament during the Commonwealth period led to war with the Dutch Republic. During the Commonwealth period (1649–1660), England was under the rule of Oliver Cromwell, who had led the opponents of King Charles I in his overthrow during the English Civil War. In the early stages of this First Anglo-Dutch War (1652–1654), the superiority of the large, heavily armed English ships was offset by superior Dutch tactical organization. English tactical improvements resulted in a series of crushing victories in 1653, bringing peace on favorable terms. On the English side, this was the first war fought largely by purpose-built, state-owned warships. After the English monarchy was restored in 1660, at the conclusion of the Commonwealth period, Charles II re-established the navy, which became a national institution but carried the title of “The Royal Navy.” England’s first permanent settlement in the Americas was founded in 1607 at Jamestown, led by Captain John Smith and managed by the Virginia Company. Bermuda was accidentally settled and claimed by England in 1609 after the Virginia Company’s flagship was shipwrecked there.
Soon after, more English colonies were created, mainly out of a desire for freedom of religion. The Virginia Company’s charter was revoked in 1624, and direct control of Virginia was assumed by the crown, thereby founding the Colony of Virginia. In 1620, Plymouth was founded as a haven for puritan religious separatists, later known as the Pilgrims. Fleeing religious persecution would become the motive of many English would-be colonists willing to risk the arduous trans-Atlantic voyage: Maryland (1634) was founded as a haven for Roman Catholics; Rhode Island (1636) as a colony tolerant of all religions; and Connecticut (1639) for Congregationalists. The Province of Carolina was founded in 1663. With the surrender of Fort Amsterdam in 1664, England gained control of the Dutch colony of New Netherland, renaming it New York. In 1681, the colony of Pennsylvania was founded by William Penn. The American colonies were less financially successful than those of the Caribbean, but they had large areas of good agricultural land and attracted far larger numbers of English emigrants, who preferred their temperate climates. In the British Caribbean, the percentage of the population of African descent rose from 25% in 1650 to around 80% in 1780, and in the 13 Colonies it rose from 10% to 40% over the same period (the majority in the southern colonies). For the slave traders, the trade was extremely profitable, and it became a major economic mainstay. Although Britain was relatively late in its efforts to explore and colonize the New World, lagging behind Spain and Portugal, it eventually gained significant territories in North America and the Caribbean. French Exploration France established colonies in North America, the Caribbean, and India in the 17th century.
While France lost most of its American holdings to Spain and Great Britain before the end of the 18th century, it eventually expanded its Asian and African territories in the 19th century. The French in the New World: New France France began to establish colonies in North America, the Caribbean, and India in the 17th century. The French first came to the New World as explorers, seeking wealth and a route to the Pacific Ocean. Major French exploration of North America began under the rule of Francis I of France. In 1524, Francis sent the Italian-born Giovanni da Verrazzano to explore the region between Florida and Newfoundland for a route to the Pacific Ocean. Verrazzano gave the names Francesca and Nova Gallia to the land between New Spain and English Newfoundland, thus promoting French interests. In 1534, Francis sent Jacques Cartier on the first of three voyages to explore the coast of Newfoundland and the St. Lawrence River. Cartier founded New France by planting a cross on the shore of the Gaspé Peninsula. He is believed to have accompanied Verrazzano to Nova Scotia and Brazil, and he was the first European to travel inland in North America. He claimed what is now Canada for France, naming the region around the Gulf of Saint Lawrence “The Country of Canadas,” after an Iroquoian word for settlement.
In 1541, he attempted to create the first permanent European settlement in North America at Cap-Rouge (Quebec City) with 400 settlers, but the settlement was abandoned the next year. A number of other failed attempts to establish French settlements in North America followed throughout the rest of the 16th century. Through alliances with various Native American tribes, the French were able to exert loose control over much of the North American continent, but areas of French settlement were generally limited to the St. Lawrence River Valley. Prior to the establishment of the 1663 Sovereign Council, the territories of New France were developed as mercantile colonies. It was only after 1665 that France gave its American colonies the proper means to develop populated colonies comparable to those of the British. By the first decades of the 18th century, the French had created and controlled such colonies as Quebec, La Baye des Puants (present-day Green Bay), Ville-Marie (Montreal), Fort Pontchartrain du Détroit (modern-day Detroit), La Nouvelle Orléans (New Orleans), and Baton Rouge. However, there was relatively little interest in colonialism in France, which instead concentrated on dominance within Europe, and for most of its history New France was far behind the British North American colonies in both population and economic development. In 1699, French territorial claims in North America expanded with the foundation of Louisiana in the basin of the Mississippi River. The extensive trading network throughout the region connected to Canada through the Great Lakes and was maintained through a vast system of fortifications, many of them centered in the Illinois Country and in present-day Arkansas. New France was the area colonized by France in North America during a period beginning with the exploration of the Saint Lawrence River by Jacques Cartier in 1534 and ending with the cession of New France to Spain and Great Britain in 1763.
At its peak in 1712, the territory of New France extended from Newfoundland to the Rocky Mountains, and from Hudson Bay to the Gulf of Mexico, including all the Great Lakes of North America. The West Indies As the French empire in North America grew, the French also began to build a smaller but more profitable empire in the West Indies. Settlement along the South American coast in what is today French Guiana began in 1624, and a colony was founded on Saint Kitts in 1625. Colonies in Guadeloupe and Martinique were founded in 1635 and on Saint Lucia in 1650. The food-producing plantations of these colonies were built and sustained through slavery and were dependent on the African slave trade. France’s most important Caribbean colonial possession was established in 1664, when the colony of Saint-Domingue (today’s Haiti) was founded on the western half of the Spanish island of Hispaniola. In the 18th century, Saint-Domingue grew to be the richest sugar colony in the Caribbean. The eastern half of Hispaniola (today’s Dominican Republic) also came under French rule for a short period, after being given to France by Spain in 1795. In the middle of the 18th century, a series of colonial conflicts began between France and Britain; these conflicts ultimately resulted in the near-complete expulsion of France from the Americas and the destruction of most of the first French colonial empire. Africa and Asia French colonial expansion was not limited to the New World. In Senegal in West Africa, the French began to establish trading posts along the coast in 1624. In 1664, the French East India Company was established to compete for trade in the east. In 1830, with the decay of the Ottoman Empire, the French seized Algiers, thus beginning the colonization of French North Africa. Colonies were also established in India at Chandernagore (1673) and Pondichéry (1674). Later colonies were added at Yanam (1723), Mahe (1725), and Karikal (1739).
Finally, colonies were founded in the Indian Ocean, on the Île de Bourbon (Réunion, 1664), Isle de France (Mauritius, 1718), and the Seychelles (1756). While the French never rebuilt their American empire, their influence in Africa and Asia expanded significantly over the course of the 19th century. Attributions Licenses and Attributions CC licensed content, Shared previously Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC licensed content, Specific attribution Title Image - "Landing of Columbus" by Albert Bierstadt. Attribution: Albert Bierstadt, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:Bierstadt_Albert_The_Landing_of_Columbus.jpg. License: CC BY-SA: Attribution-ShareAlike Maritime Republics. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Maritime_republics. License: CC BY-SA: Attribution-ShareAlike Marco Polo. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Marco_Polo. License: CC BY-SA: Attribution-ShareAlike Age of Discovery. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Age_of_Discovery. License: CC BY-SA: Attribution-ShareAlike Pax Mongolica. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Pax_Mongolica. License: CC BY-SA: Attribution-ShareAlike Tabula Rogeriana. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Tabula_Rogeriana. License: CC BY-SA: Attribution-ShareAlike Marco Polo traveling. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Marco_Polo_traveling.JPG. License: Public Domain: No Known Copyright Portugese Exploration. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Portuguese_discoveries. License: CC BY-SA: Attribution-ShareAlike Cape of Good Hope. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Cape_of_Good_Hope. License: CC BY-SA: Attribution-ShareAlike Vasco da Gama.
Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Vasco_da_Gama. License: CC BY-SA: Attribution-ShareAlike Age of Discovery. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Age_of_Discovery#Portuguese_exploration. License: CC BY-SA: Attribution-ShareAlike Ferdinand Magellan. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Ferdinand_Magellan. License: CC BY-SA: Attribution-ShareAlike Marco Polo traveling. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Marco_Polo_traveling.JPG. License: Public Domain: No Known Copyright Gama Route 1. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Gama_route_1.png. License: CC BY-SA: Attribution-ShareAlike How Portugal became the first global sea power. Located at: http://www.youtube.com/watch?v=dcdO0QTmxIU. License: Public Domain: No Known Copyright. License Terms: Standard YouTube license Age of Discovery. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Age_of_Discovery. License: CC BY-SA: Attribution-ShareAlike Spanish conquest of the Aztec Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Spanish_conquest_of_the_Aztec_Empire. License: CC BY-SA: Attribution-ShareAlike Voyages of Christopher Columbus. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Voyages_of_Christopher_Columbus. License: CC BY-SA: Attribution-ShareAlike Christopher Columbus. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Christopher_Columbus. License: CC BY-SA: Attribution-ShareAlike Treaty of Zaragoza. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Treaty_of_Zaragoza. License: CC BY-SA: Attribution-ShareAlike Treaty of Tordesillas. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Treaty_of_Tordesillas. License: CC BY-SA: Attribution-ShareAlike Reconquista. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Reconquista. 
License: CC BY-SA: Attribution-ShareAlike Spanish Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Spanish_Empire. License: CC BY-SA: Attribution-ShareAlike Spanish conquest of the Inca Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Spanish_conquest_of_the_Inca_Empire. License: CC BY-SA: Attribution-ShareAlike Spanish colonization of the Americas. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Spanish_colonization_of_the_Americas. License: CC BY-SA: Attribution-ShareAlike Marco Polo traveling. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Marco_Polo_traveling.JPG. License: Public Domain: No Known Copyright Gama Route 1. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Gama_route_1.png. License: CC BY-SA: Attribution-ShareAlike How Portugal became the first global sea power. Located at: http://www.youtube.com/watch?v=dcdO0QTmxIU. License: Public Domain: No Known Copyright. License Terms: Standard YouTube license First Voyage, Departure for the New World, August 3, 1492. Provided by: Wikimedia Commons. Located at: http://commons.wikimedia.org/wiki/File:First_Voyage,_Departure_for_the_New_World,_August_3,_1492.jpg. License: Public Domain: No Known Copyright Navigation Acts. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Navigation_Acts. License: CC BY-SA: Attribution-ShareAlike Francis Drake. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Francis_Drake. License: CC BY-SA: Attribution-ShareAlike First Anglo-Dutch War. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/First_Anglo-Dutch_War. License: CC BY-SA: Attribution-ShareAlike Age of Discovery. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Age_of_Discovery. License: CC BY-SA: Attribution-ShareAlike Royal Navy. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Royal_Navy. 
License: CC BY-SA: Attribution-ShareAlike Jamestown, Virginia. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Jamestown,_Virginia. License: CC BY-SA: Attribution-ShareAlike Roanoke Colony. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Roanoke_Colony. License: CC BY-SA: Attribution-ShareAlike Plymouth Colony. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Plymouth_Colony. License: CC BY-SA: Attribution-ShareAlike British Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/British_Empire. License: CC BY-SA: Attribution-ShareAlike Marco Polo traveling. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Marco_Polo_traveling.JPG. License: Public Domain: No Known Copyright Gama Route 1. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Gama_route_1.png. License: CC BY-SA: Attribution-ShareAlike How Portugal became the first global sea power. Located at: http://www.youtube.com/watch?v=dcdO0QTmxIU. License: Public Domain: No Known Copyright. License Terms: Standard YouTube license First Voyage, Departure for the New World, August 3, 1492. Provided by: Wikimedia Commons. Located at: http://commons.wikimedia.org/wiki/File:First_Voyage,_Departure_for_the_New_World,_August_3,_1492.jpg. License: Public Domain: No Known Copyright Tobacco_cultivation_Virginia_ca._1670.jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/British_Empire#/media/File:Tobacco_cultivation_(Virginia,_ca._1670).jpg. License: CC BY-SA: Attribution-ShareAlike 1280px-British_colonies_1763-76_shepherd1923.PNG. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/British_Empire#/media/File:British_colonies_1763-76_shepherd1923.PNG. License: Public Domain: No Known Copyright Mercantilism. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Mercantilism. License: CC BY-SA: Attribution-ShareAlike French colonization of the Americas. 
Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_colonization_of_the_Americas. License: CC BY-SA: Attribution-ShareAlike New France. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/New_France. License: CC BY-SA: Attribution-ShareAlike French colonial empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_colonial_empire. License: CC BY-SA: Attribution-ShareAlike Sovereign Council of New France. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Sovereign_Council_of_New_France. License: CC BY-SA: Attribution-ShareAlike Age of Discovery. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Age_of_Discovery. License: CC BY-SA: Attribution-ShareAlike Carib Expulsion. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Carib_Expulsion. License: CC BY-SA: Attribution-ShareAlike Marco Polo traveling. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Marco_Polo_traveling.JPG. License: Public Domain: No Known Copyright Gama Route 1. Provided by: Wikimedia . Located at: http://commons.wikimedia.org/wiki/File:Gama_route_1.png. License: CC BY-SA: Attribution-ShareAlike How Portugal became the first global sea power. Located at: http://www.youtube.com/watch?v=dcdO0QTmxIU. License: Public Domain: No Known Copyright. License Terms: Standard YouTube license First Voyage, Departure for the New World, August 3, 1492. Provided by: Wikimedia Commons. Located at: http://commons.wikimedia.org/wiki/File:First_Voyage,_Departure_for_the_New_World,_August_3,_1492.jpg. License: Public Domain: No Known Copyright Tobacco_cultivation_Virginia_ca._1670.jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/British_Empire#/media/File:Tobacco_cultivation_(Virginia,_ca._1670).jpg. License: CC BY-SA: Attribution-ShareAlike 1280px-British_colonies_1763-76_shepherd1923.PNG. Provided by: Wikipedia. 
Located at: https://en.wikipedia.org/wiki/British_Empire#/media/File:British_colonies_1763-76_shepherd1923.PNG. License: Public Domain: No Known Copyright 1024px-Nouvelle-France_map-en.svg.png. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_colonization_of_the_Americas#/media/File:Nouvelle-France_map-en.svg. License: CC BY-SA: Attribution-ShareAlike Cartier.png. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_colonization_of_the_Americas#/media/File:Cartier.png. License: Public Domain: No Known Copyright
oercommons
2025-03-18T00:35:08.426313
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87881/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
European Explorers’ Links to the Islamic World Overview European Explorers’ Links to the Islamic World One of the foundations of the European Age of Exploration consisted of links between European explorers and the Islamic World. These links were grounded in European trade aspirations across Asia and the evolving competition between Europe and the Islamic world, as defined by their religious differences. This competition is all the more ironic when considering that Christian Europe and the Islamic World worshipped the same god. Learning Objectives - Identify the dynamics of trade and political power that led to European exploration of the New World. - Assess the contributions of these three empires to the early-modern world. Key Terms / Key Concepts Pax Mongolica - also known as the Mongol Peace, a system of relationships across Mongol-dominated Asia that allowed trade, technologies, commodities, and ideologies to be disseminated and exchanged across Eurasia Marco Polo - Venetian merchant and explorer who travelled across Asia during the last third of the thirteenth century and inspired later explorers, such as Christopher Columbus Europe’s Early Trade Links A prelude to the Age of Discovery was a series of European land expeditions across Eurasia in the late Middle Ages. These expeditions were undertaken by a number of explorers, including Marco Polo, who left behind a detailed and inspiring record of his travels across Asia. These expeditions also reinforced links between European explorers and the Islamic World. Background European medieval knowledge about Asia, beyond the reach of the Byzantine Empire, was based on partial reports, often obscured by legends, dating from the time of the conquests of Alexander the Great and his successors. In 1154, the Arab geographer Muhammad al-Idrisi created what would be known as the Tabula Rogeriana at the court of King Roger II of Sicily. The book, written in Arabic, is a description of the world and includes a world map.
The map is divided into seven climate zones and contains the Eurasian continent in its entirety, but only the northern part of the African continent. It remained the most accurate world map for the next three centuries, and it demonstrated that Africa was only partially known to Christian, Genoese, Venetian, and Arab seamen. Its southern extent was unknown. Knowledge about the Atlantic African coast was fragmented and derived mainly from old Greek and Roman maps based on Carthaginian knowledge, which included information gathered during the Roman exploration of Mauritania. The Red Sea was barely known, and only the trade links of the Maritime republics—the Republic of Venice especially—fostered collection of accurate maritime knowledge. Indian Ocean trade routes were sailed by Arab traders. Between 1405 and 1421, the Yongle Emperor of Ming China sponsored a series of long-range tributary missions. The fleets visited Arabia, East Africa, India, Maritime Southeast Asia, and Thailand. But the journeys, reported by Ma Huan—a Muslim voyager and translator—were halted abruptly after the emperor’s death, when the Chinese Ming Dynasty retreated into the haijin: a policy of isolationism with limited maritime trade. Prelude to the Age of Discovery A series of European expeditions crossing Eurasia by land in the late Middle Ages marked a prelude to the Age of Discovery. Although the Mongols had threatened Europe with pillage and destruction, Mongol states also unified much of Eurasia and, from 1206 on, the Pax Mongolica allowed safe trade routes and communication lines that stretched from the Middle East to China. A series of Europeans took advantage of these routes and explored eastward. Most were Italians, as trade between Europe and the Middle East was controlled mainly by the Maritime republics. Christian embassies were sent as far as Karakorum during the Mongol invasions of Syria, and from them Europeans gained a greater understanding of the world.
The first of these travelers was Giovanni da Pian del Carpine, who journeyed to Mongolia and back from 1241 to 1247. About the same time, Russian prince Yaroslav of Vladimir, and subsequently his sons Alexander Nevsky and Andrey II of Vladimir, traveled to the Mongolian capital. Though their journeys had strong political implications, they left no detailed accounts. Other travelers followed, such as the Frenchman André de Longjumeau and the Fleming William of Rubruck, who reached China through Central Asia. From 1325 to 1354, a Moroccan scholar from Tangier, Ibn Battuta, journeyed through North Africa, the Sahara Desert, West Africa, Southern Europe, Eastern Europe, the Horn of Africa, the Middle East, and Asia, finally reaching China. In 1439, Niccolò de’ Conti published an account of his travels as a Muslim merchant to India and Southeast Asia. Later, between 1466 and 1472, the Russian merchant Afanasy Nikitin of Tver travelled to India. Marco Polo, a Venetian merchant, dictated an account of his journeys throughout Asia from 1271 to 1295. His travels are recorded in the Book of the Marvels of the World (also known as The Travels of Marco Polo, c. 1300), a book that did much to introduce Europeans to Central Asia and China. Marco Polo was not the first European to reach China, but he was the first to leave a detailed chronicle of his experience. His book inspired Christopher Columbus and many other travelers. Attributions Title Image - Christian And Moor Playing Chess. Libros de juegos d'Alphonse X le sage, fol. 64r, c. 1251-83. Attribution: Unknown author, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:ChristianAndMuslimPlayingChess.JPG. License: CC BY-SA: Attribution-ShareAlike. Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-age-of-discovery/
Reconquista and Technology Overview Reconquista and Technology The Reconquista refers to the period of mainly military efforts by Christians to reconquer land on the Iberian peninsula taken by Muslims by the early eighth century. The Reconquista lasted from the early eighth century to 1492, when the last of the Muslim forces were driven from the southern edge of Iberia. It occurred within the context of wars during the Middle Ages between Christian and Muslim forces over control of southwest and south-central Europe. However, this period in Iberian history was marked not only by conflict between Christians and Muslims, but also by peaceful coexistence, including cultural exchanges. Learning Outcome - Identify the dynamics of trade and political power that led to European exploration of the New World. Key Terms / Key Concepts reconquista: Christian reconquest of Iberia from the eighth through the fifteenth century C.E. Arianism: a Christian doctrine concerning the Trinity that came to be seen as unorthodox, even heretical Christopher Columbus - Genoese explorer credited with the discovery of the Americas Prelude to the Muslim Conquest of Iberia: Catholicism’s Triumph over Arianism Before the Muslim invasion of Iberia, Catholicism established its primacy in the peninsula at the conclusion of a brief struggle with Arianism during the sixth century. Although the period of rule by the Visigothic Kingdom (c. 5th – 8th centuries) saw the brief spread of Arianism, the Catholic religion coalesced in Spain during this time. The Councils of Toledo debated creed and liturgy in orthodox Catholicism, and the Council of Lerida in 546 constrained the clergy and extended the power of law over them under the blessings of Rome. In 587, the Visigothic king at Toledo—Reccared—converted to Catholicism and launched a movement in Spain to unify the various religious doctrines that existed in the land. This put an end to dissension on the question of Arianism.
Subsequently, Catholicism consolidated its control and domination over Iberia with the Reconquista and the Spanish Inquisition that followed. These developments have influenced the evolution of Spain and Iberia to the present. Background to and Beginning of the Reconquista The Reconquista (“reconquest”) is a period in the history of the Iberian Peninsula, spanning approximately 770 years. Historians traditionally mark the beginning of the Reconquista with the Battle of Covadonga (most likely in 722), and its end with the fall of Granada in 1492. The successful conclusion of the Reconquista is associated with Portuguese and Spanish colonization of the Americas. Between the initial Umayyad conquest of Hispania in the 710s and the fall of the Emirate of Granada, the last Islamic state on the peninsula, to expanding Christian kingdoms in 1492, the Reconquista progressed slowly and unevenly. Arab Islamic forces had dominated most of North Africa by 710 CE. In 711 an Islamic Berber raiding party, led by Tariq ibn Ziyad, was sent to Iberia to intervene in a civil war in the Visigothic Kingdom. Tariq’s army crossed the Strait of Gibraltar and won a decisive victory in the summer of 711 when the Visigothic King Roderic was defeated and killed at the Battle of Guadalete. Tariq’s commander, Musa, quickly crossed with Arab reinforcements, and by 718 the Muslims were in control of nearly the whole Iberian Peninsula. The Franks stopped the Muslim advance into western Europe at the 732 Battle of Tours. In the summer of 722, a victory for the Christians took place at Covadonga, in the north of the Iberian Peninsula. In this minor but symbolically important battle, a Muslim force sent to put down the Christian rebels in the northern mountains was defeated by Pelagius of Asturias, who established the monarchy of the Christian Kingdom of Asturias.
In 739, a rebellion in Galicia, assisted by the Asturians, drove out Muslim forces; Galicia then joined the Asturian kingdom. The Kingdom of Asturias became the main base for Christian resistance to Islamic rule in the Iberian Peninsula for several centuries. Warfare between Muslims and Christians Medieval Spain was the scene of almost constant warfare between Muslims and Christians. Muslim interest in the peninsula returned in force around 985, when Al-Mansur sacked Barcelona. Under his son, other Christian cities were subjected to numerous raids. After his son’s death, the caliphate plunged into a civil war and splintered into the so-called “Taifa Kingdoms.” The Taifa kingdoms lost ground to the Christian realms in the north. After the loss of Toledo in 1085, the Muslim rulers reluctantly invited the Almoravids into the conflict with Christian forces; the Almoravids invaded Al-Andalus from North Africa and established an empire. In the 12th century the Almoravid empire broke up, only to be taken over by the Almohads, who had seized the Almoravids’ Maghribi and al-Andalus territories by 1147. The Almohads surpassed the Almoravids in fundamentalist Islamic outlook, and they treated the non-believer dhimmis harshly. Faced with the choice of death, conversion, or emigration, many Jews and Christians left. The Almohads were defeated by an alliance of the Christian kingdoms in the decisive battle of Las Navas de Tolosa in 1212. By 1250, nearly all of Iberia was back under Christian rule, with the exception of the Muslim kingdom of Granada, the last state in Iberia to be taken back from Muslim forces. The reconquest of Granada in 1492 marked the end of the Reconquista. Despite the Reconquista, the Muslim presence in Iberia left lasting legacies in technology, architecture, the arts, and literature. Spanish Inquisition The most prominent legacy of the Reconquista was the Spanish Inquisition.
Around 1480, Ferdinand II of Aragon and Isabella I of Castile, known as the Catholic Monarchs, established this campaign of religious expulsion that targeted Muslims and Jews. It was intended to maintain Catholic orthodoxy in their kingdoms and to replace the Medieval Inquisition, which was under Papal control. It covered Spain and all the Spanish colonies and territories, which included the Canary Islands, the Spanish Netherlands, the Kingdom of Naples, and all Spanish possessions in the Americas. People who converted to Catholicism were not subject to expulsion, but between 1480 and 1492 hundreds of those who had converted (conversos and moriscos) were accused of secretly practicing their original religion (crypto-Judaism or crypto-Islam); they were arrested, imprisoned, interrogated under torture, and in some cases burned to death, in both Castile and Aragon. In 1492 Ferdinand and Isabella ordered segregation of communities to create closed quarters that became what were later called “ghettos.” They also furthered economic pressures upon Jews and other non-Christians by increasing taxes and social restrictions. In 1492 the monarchs issued a decree of expulsion of Jews, known formally as the Alhambra Decree, which gave Jews in Spain four months to either convert to Catholicism or leave Spain. Tens of thousands of Jews emigrated to other lands such as Portugal, North Africa, the Low Countries, Italy, and the Ottoman Empire. Later in 1492, Ferdinand issued a letter addressed to the Jews who had left Castile and Aragon, inviting them back to Spain if they had become Christians. The Inquisition was not definitively abolished until 1834, during the reign of Isabella II, after a period of declining influence in the preceding century. Most of the descendants of the Muslims who submitted to Christian conversion—the Moriscos—were later expelled from Spain after serious social upheaval, when the Inquisition was at its height. 
The expulsions were carried out more severely in eastern Spain (Valencia and Aragon) due to local animosity towards Muslims and Moriscos perceived as economic rivals; local workers saw them as cheap labor undermining their bargaining position with the landlords. Those that the Spanish Inquisition found to be secretly practicing Islam or Judaism were executed, imprisoned, or expelled. Nevertheless, all those deemed to be “New Christians” were perpetually suspected of various crimes against the Spanish state, including continued practice of Islam or Judaism. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - "Moorish army (right) of Almanzor during the Reconquista Battle of San Esteban de Gormaz, from Cantigas de Alfonso X el Sabio". Attribution: Unknown author, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:Cantigas_battle.jpg. License: CC BY-SA: Attribution-ShareAlike - Alhambra Decree. Provided by: Wikipedia. Location: https://en.wikipedia.org/wiki/Alhambra_Decree. License: CC BY-SA: Attribution-ShareAlike - Battle of Covadonga. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Reconquista. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Visigothic Kingdom. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Kingdom of Asturias. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Spanish Inquisition. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Arianism. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - History of Spain. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Catholic Monarchs. Provided by: Wikipedia. 
License: CC BY-SA: Attribution-ShareAlike - Francisco_de_Goya_-_Escena_de_Inquisiciu00f3n_-_Google_Art_Project.jpg. Provided by: Wikipedia. License: Public Domain: No Known Copyright - La_Rendiciu00f3n_de_Granada_-_Pradilla.jpg. Provided by: Wikipedia. License: Public Domain: No Known Copyright - Spanish Golden Age. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Habsburg Spain. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Morisco. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - El Escorial. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike
Portuguese Colonization Overview Portuguese History and Culture Portuguese colonization rested upon two central themes: technology and training for Portuguese sailors. The technologies that came from the Islamic World, such as the compass, astrolabe, rudder, and sails, were all instrumental in helping the Portuguese become the first Europeans to explore the world. By incorporating these tools, the Portuguese leader Prince Henry the Navigator established schools and training for sailors that helped them correctly position themselves and their newly discovered lands. Prince Henry the Navigator is one of the most important Portuguese leaders because the sailing practices he established gave the Portuguese a different mentality for colonization. The Portuguese practiced a unique version of colonization: instead of directly establishing control and long-term holdings, they established very limited government and left the Catholic Church to play a pivotal role in the colonies, because the Portuguese goal was exploring and learning new areas. From this developed the "I mapped it, I own it" strategy, which was based on merely being able to identify an area first. The strategy had many serious faults, but it provided the Portuguese with a clear methodology for the conquest of Brazil and parts of Oceania. Learning Objectives - Evaluate the Portuguese strategy for colonization. - Analyze the impact of the Portuguese on the development of European traders. - Compare and contrast Portuguese colonization between the Atlantic and Indian Ocean worlds. - Evaluate the importance of Brazil in the Portuguese system. Key Terms / Key Concepts Vasco da Gama: a Portuguese explorer and one of the most famous and celebrated explorers of the Age of Discovery; the first European to reach India by sea.
Introduction Portuguese sailors were at the vanguard of European overseas exploration, discovering and mapping the coasts of Africa, Asia, and Brazil. As early as 1317, King Denis made an agreement with the Genoese merchant sailor Manuel Pessanha (Pesagno), appointing him first Admiral, with trade privileges with his homeland, in return for twenty warships and crews; this was done with the goal of defending the country against Muslim pirate raids. But the agreement also created the basis for the Portuguese Navy and the establishment of a Genoese merchant community in Portugal. In the second half of the 14th century, outbreaks of bubonic plague led to severe depopulation: the economy became extremely localized in a few towns, unemployment rose, and migration led to the abandonment of agricultural land. Only the sea offered alternatives, with most people settling on fishing and trading in coastal areas. Between 1325 and 1357, Afonso IV of Portugal granted public funding to raise a proper commercial fleet and ordered the first maritime explorations under the command of Admiral Pessanha, with the help of the Genoese. In 1341, the Canary Islands, already known to the Genoese, were officially explored under the patronage of the Portuguese king, but in 1344, the Spanish kingdom of Castile disputed them, further propelling Portuguese naval efforts. Technology The Islamic conquest of the Iberian Peninsula in 711 CE brought many useful tools to the Portuguese. The incorporation of Islamic culture and arts into Portugal meant that the Portuguese integrated technologies and cultures of Islam. The Islamic technologies of the astrolabe, compass, rudder, and sail proved quite beneficial to Portugal. The use of the compass and astrolabe also meant that the Portuguese had another piece of technology that allowed them to sail further into the open ocean without the fear of being lost or falling off the other side of the world.
In the 21st century it is common to use GPS to find one's way, but in the 14th and 15th centuries the best tools for navigation were the astrolabe and the compass, which provided only approximate locations for explorers. Still, the astrolabe and the compass were advanced for their time and made it possible to travel through the open ocean to places that had never been reached before. The rudder and sail were also key additions to Portuguese shipbuilding. The rudder made turning in the open ocean easier and gave ships crucial maneuverability in low winds. Sails were not only triangular but also curved, designed to catch as much wind as possible. Together, the rudder and sail meant that Portuguese ships could make the most of limited wind conditions in becalmed regions such as the horse latitudes. The ability to move in low winds and tides, coupled with the compass and astrolabe, gave the Portuguese a clear advantage over their European counterparts, who did not have access to these technologies. The adoption of these Islamic technologies proved an important shift for both the Portuguese and the Spanish, because it gave these kingdoms an edge over the rest of Europe. Many other European states of the period used large square-rigged masts that did not rotate or move, and their ships lacked rudders; combined, this made it difficult to maneuver in open ocean water. Prince Henry the Navigator recognized the importance of these inventions early on and integrated them into his training school for explorers. Henry realized that future explorers and navigators would need to be trained in how to use these tools, and the development of this kind of schooling enabled Portugal's successes in discovering new routes and traveling to more regions.
Through the training developed by Prince Henry the Navigator, Portuguese travelers developed a distinctive mentality when exploring. The Portuguese adopted an "I mapped it, I own it" stance toward exploration. Because Portuguese explorers were experts at finding new locations, they regarded those locations as proprietary information. As a result, the Portuguese invested less in defending their claims or subjugating indigenous populations, because they felt that other European states were simply not entitled to find these areas. This strategy would prove problematic for the Portuguese, especially as other European states began expanding quickly throughout the Atlantic, Indian, and Pacific Ocean worlds, where the Portuguese had been successful early in their exploration.

Vasco da Gama

The long-standing Portuguese goal of finding a sea route to Asia was finally achieved in a groundbreaking voyage commanded by Vasco da Gama. His squadron left Portugal in 1497, rounded the Cape, and continued along the coast of East Africa, where a local pilot was brought on board who guided the ships across the Indian Ocean. Da Gama reached Calicut in western India in May 1498. Reaching the legendary Indian spice routes unopposed helped the Portuguese improve their economy, which until then had been based mainly on trade along northern and coastal West Africa. These spices were at first mostly pepper and cinnamon, but they soon included other products, all new to Europe. The second voyage to India was dispatched in 1500 under Pedro Álvares Cabral. While following the same southwesterly route as da Gama across the Atlantic Ocean, Cabral made landfall on the Brazilian coast. This was probably an accident, but it has been speculated that the Portuguese knew of Brazil's existence beforehand. Cabral recommended to the Portuguese king that the land be settled, and two follow-up voyages were sent in 1501 and 1503.
The land was found to be abundant in pau-brasil, or brazilwood, from which it later took its name. But the failure to find gold or silver meant that, for the time being, Portuguese efforts remained concentrated on India. As the Portuguese explored the Indian and Pacific Ocean worlds, they came into contact with new populations and grew increasingly interested in acquiring new goods from these regions. Portuguese traders followed the spice routes in the Indian Ocean and soon came to acquire many valuable goods, such as spices and sugar, from tropical areas. Sugar was a very important product in the 15th century because it added flavor to foods, and in Europe it was very expensive. Gaining access to sugar cane meant that the Portuguese could plant their own sugar in similar tropical climates. This led Portugal to seek tropical areas much closer to Europe in which to establish sugar production. Among the best locations available to the Portuguese were the islands of the Canaries and the Azores, off northwestern Africa. These were ideal because of their warm climate, closeness to Europe, and ease of access to ports. As the Portuguese established sugar production sites, they quickly discovered that to maximize profits from planting they would have to adopt a different type of farming. Portuguese farmers began to plant on large plantations, which required large amounts of labor. These farms were modeled on the Roman system of large landed estates. The development of the plantation system meant that the Portuguese needed many agricultural workers to produce sugar. This was where the Portuguese faced a significant problem: they did not know where this labor would come from. There was a small indigenous population on the Canaries and Azores, and these populations refused to work for the Portuguese.
The indigenous population's refusal meant that the Portuguese had to look elsewhere for plantation labor. Portugal's small Jewish population was first offered the chance to migrate to the islands as a way to practice their cultural and religious beliefs. This approach was unsuccessful because the Portuguese were too demanding of the workers; many of the Jewish settlers on the islands either failed to meet their contracted goals or simply refused to continue working. The Portuguese estate owners therefore looked to another source of labor: enslaved Africans. The transatlantic slave trade started as early as the Portuguese's first colonial settlements off the coast of Africa. The need for slaves brought the Portuguese into contact with Central African peoples, such as the Kingdom of the Kongo. The relationships that the Portuguese developed with the Kongolese were central to the overall development of the colonies in the Canaries and Azores. Trading weapons, finished goods, and some gold for slaves in Africa provided a key source of labor for the Portuguese. With enslaved African labor, the Portuguese were able to develop their colonial model of plantation economics and sugar production. This abhorrent system entered the European mentality with the Portuguese and would become central to how other Europeans developed their colonial economic relationships.

Brazil

As the Portuguese continued to explore through the 15th and into the early 16th centuries, their colonial footprint grew throughout the world. Many Portuguese explorers knew the best routes through the Atlantic Ocean into the Pacific. Through a coincidence of accidents and luck, the Portuguese discovered Brazil and developed a key colony that would become a major resource center.
Key Terms / Key Concepts

brazilwood: a genus of flowering plants in the legume family, Fabaceae (This plant has a dense, orange-red heartwood that takes a high shine, and it is the premier wood used to make bows for stringed instruments. The wood also yields a red dye called brazilin, which oxidizes to brazilein. Starting in the 16th century, this tree became highly valued in Europe and quite difficult to obtain.)

engenhos: a colonial-era Portuguese term for a sugar cane mill and the associated facilities

Captaincies system: the hereditary colonial government of the Portuguese in Brazil; there were ten captaincies in colonial Brazil

Pedro Álvares Cabral was a Portuguese explorer and military commander who sailed in the late 15th and early 16th centuries. Cabral set sail in early 1500, following a route to India like that of his predecessor Vasco da Gama. Cabral and his men soon found themselves in an unusual predicament. After crossing the equator, they swung as far westward from Africa as they could. The route proved fateful: less than a month after leaving port, they found instead a new land on the northeastern coast of Brazil. The site was christened Monte Pascoal, meaning Easter Mount, because it was sighted during Easter week. Cabral and his men soon found that this land was unique and began exploring the territory. They discovered that the northeastern corner of Brazil was tropical and an excellent location for developing sugar production. They also found that the local trees could be harvested to produce a deep red dye. This was the brazilwood tree, from which the region takes its name. The brazilwood trade was key to the early Portuguese colonies and would be a central reason for the further exploration of Brazil. The early exploration of Brazil also led to the development of a colonial government.
The Brazilian government was organized into captaincies, developed in the early 16th century when the Portuguese monarchy used land grants with governing privileges as a way to colonize new lands. This system provided a unique form of government. Ten different captaincies divided the northeastern coastline of Brazil. These captaincies had limited oversight from the Portuguese government, and the colonists were subject to few specific rules while in Brazil. The majority of early Brazilian settlement was located along the northeastern coast. The Amazon River was dangerous for many Portuguese settlers because of mosquito-borne diseases and the difficulties of travel. One of the few groups that became central to the exploration of the interior was the Jesuit priests, who traveled to the Amazon looking for indigenous people to convert. The Jesuits, a religious order of the Catholic Church devoted to spreading the faith, were a major force in Brazilian colonization. This points to one of the key methods of Portuguese colonization: the central part that Christianity played in settling Brazil. In 1530, an expedition led by Martim Afonso de Sousa arrived in Brazil to patrol the entire coast, expel the French, and create the first colonial villages on the coast, such as São Vicente. The Portuguese crown devised a system to occupy Brazil effectively without paying the costs itself. Through the hereditary Captaincies system, Brazil was divided into strips of land that were donated to Portuguese noblemen, who were in turn responsible for the occupation and administration of the land while answering to the king. The system was largely a failure, with only four lots successfully occupied: Pernambuco, São Vicente (later called São Paulo), the Captaincy of Ilhéus, and the Captaincy of Porto Seguro.
The captaincies gradually reverted to the Crown and became provinces and eventually states of the country. Starting in the 16th century, sugarcane grown on plantations along the northeast coast—called engenhos—became the base of the Brazilian economy and society; these plantations relied on slaves to make sugar for export to Europe. At first, settlers tried to enslave the natives as labor to work the fields. However, colonists were unable to sustain the enslavement of natives, and Portuguese landowners soon imported millions of slaves from Africa. Mortality rates for slaves in the sugar and gold enterprises were very high, and there were often not enough females or proper conditions to replenish the slave population. Still, Africans became a substantial portion of the Brazilian population; and, long before the end of slavery in 1888, they began to merge with the European Brazilian population through interracial marriage. The indigenous people of Brazil were in a very unusual position. The Brazilian rainforest was dense and allowed only limited agricultural production. This meant that there were few large tribes; instead, most of Brazil's indigenous populations were small and had limited resources. When the Portuguese arrived, they had no clear laws protecting indigenous peoples. These two factors placed the indigenous people in a unique position within the Portuguese system: the settlers wanted to use them as labor, but when the Portuguese tried to force them to work, the indigenous people could escape and live outside the Portuguese system, where the colonists could not find them.

The Gold Rush

The discovery of gold in Brazil was met with great enthusiasm by Portugal, which had an economy in disarray following years of wars against Spain and the Netherlands. A gold rush quickly ensued, with people from other parts of the colony and from Portugal flooding the region in the first half of the 18th century.
The large inland region of Brazil where gold was extracted became known as Minas Gerais (General Mines). Gold mining in this area became the main economic activity of colonial Brazil during the 18th century. In Portugal, the gold was mainly used to pay for industrialized goods (textiles, weapons) obtained from countries like England, as well as to build magnificent monuments like the Convent of Mafra, especially during the reign of King John V. The discovery of gold in the area caused a huge influx of European immigrants, and the government decided to bring in bureaucrats from Portugal to control operations. They set up numerous bureaucracies, often with conflicting duties and jurisdictions, and the officials generally proved unequal to the task of controlling this highly lucrative industry. In 1830, the Saint John d'El Rey Mining Company, controlled by the British, opened the largest gold mine in Latin America. The British brought in modern management techniques and engineering expertise. Located in Nova Lima, the mine produced ore for 125 years. Gold production declined toward the end of the 18th century, beginning a period of relative stagnation in the Brazilian hinterland. The Portuguese system of colonization followed key ideas of using technology to find new regions, and the school of Prince Henry the Navigator provided a key resource for the Portuguese establishment of an "I mapped it, I own it" mentality. The Portuguese need for labor in their newly founded colonies meant that they began the slave system that other countries would continue to use until almost the end of the 19th century. Brazil was the biggest colonial establishment of Portugal. The Portuguese first attempted to use the indigenous population as a source of labor. This did not work: faced with the harsh conditions, the indigenous populations quickly disappeared into the forest, where the settlers could not find them.
This meant that the Portuguese faced in Brazil a problem very similar to the one they had faced in the Canaries and the Azores: how to get laborers for the sugar plantations. The solution the Portuguese arrived at was the introduction of African slaves in 1530 CE. In 1550 CE, sugar was introduced to Brazil and revolutionized the Brazilian economy. Brazil was an almost perfect site for producing sugar, with significant rainfall, tropical heat, and fertile soils in its northeastern corner. The production of sugar skyrocketed, and so too did the need for labor. The Brazilian economy soared, and sugar became a central tenet of the Portuguese integration of Brazil. While sugar was the major economic product, so too was the gold found farther south, in Mato Grosso. From the 15th into the 17th centuries, the centers of the Brazilian economy were gold, sugar, and coffee.

Attributions

Images courtesy of Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/8/81/Capitanias.jpg

Boundless World History https://www.coursehero.com/study-guides/boundless-worldhistory/the-age-of-discovery/

Work based around the ideas of Patricia Seed: Ceremonies of Possession in Europe's Conquest of the New World, 1492–1640
Spanish Colonization

Overview

Spanish Colonization

Portugal and Spain, the first two nations to explore the Atlantic Ocean, the Americas, and Africa, took different approaches. The Portuguese, the first of the two, focused on Africa and Asia, constructing trading settlements, while the Spanish established settlements oriented toward the exploitation of natural resources in the Americas and the Caribbean Sea. After Christopher Columbus set sail in 1492 CE and "discovered" the New World for the Spanish, a wave of other Spaniards sailed westward in attempts to find their own riches and lands. Individuals like Cortés and Pizarro blazed the trail that other Spaniards would use to build the Spanish colonial system. The Spanish found the Aztec and the Incan civilizations and tried to integrate the indigenous populations into their colonial settlements. Furthermore, the central role of the Catholic Church and the economic model employed were key differences between the Spanish and the Portuguese. The separate continents and oceans that the Portuguese and the Spanish explored, settled, and exploited mitigated any conflict between them.

Learning Objectives
- Evaluate the differences between Spanish, Portuguese, English, Dutch, and French colonization.
- Analyze how Spanish colonization differed between the center and periphery regions.
- Evaluate the impact of Potosí on global economics.
- Analyze the differences in how the Spanish integrated different groups into their colonial world.

Key Terms / Key Concepts

Treaty of Tordesillas: a 1494 treaty that divided the newly discovered lands outside Europe between Portugal and the Crown of Castile, along a meridian 370 leagues west of the Cape Verde islands, off the west coast of Africa (This line of demarcation was about halfway between the Cape Verde islands, which were already Portuguese, and the islands encountered by Christopher Columbus on his first voyage, which he claimed for Castile and León.)
Christopher Columbus: an Italian explorer, navigator, and colonizer who completed four voyages across the Atlantic Ocean under the monarchy of Spain, which led to general European awareness of the American continents

Bartolomé de las Casas: a sixteenth-century Spanish historian, social reformer, and Dominican friar, who arrived as one of the first European settlers in the Americas and participated in the atrocities committed against the Native Americans by the Spanish colonists (In 1515, de las Casas reformed his views and advocated before King Charles V, Holy Roman Emperor, on behalf of rights for the natives.)

Mita: a form of labor tax that required one person from each family to work in the mines, which was enforced by the Spanish once they gained control of the region

Spanish

Colonial activity on the Iberian Peninsula meant that its two major states, Portugal and Spain, were deep rivals; the proximity of the two made them natural competitors. When the Spanish started to explore, the Portuguese began to push back, and tensions rose between the two. In the 15th century, one of the only ways to resolve international tensions was to turn to the Pope. In the Middle Ages, the Pope wielded great political power over kings, including the authority to determine who could become king, and as Spanish and Portuguese tensions rose, the Pope became involved. Pope Alexander VI, who became pope in 1492, helped formulate the agreement between the Spanish and the Portuguese known as the Treaty of Tordesillas. The treaty divided the newly discovered lands outside Europe along a meridian 370 leagues west of the Cape Verde islands, off the west coast of Africa. It gave all territory outside Europe to the east of the line to the Portuguese, while the Spanish received everything to the west. At the time, this treaty was seen as completely fair and equal to both the Spanish and the Portuguese.
However, there were several underlying problems with this treaty. First, the Pope did not ask the other peoples of the world—in Africa, Asia, and the Americas—whether they consented to being claimed by either the Spanish or the Portuguese. Second, the treaty divided the world between two European powers while leaving other states, such as the French, Dutch, and English, out of colonization entirely. But it did leave the Spanish with many new territories to explore and expand into in North and South America. Columbus was the first to sail for the Spanish, and he helped establish several of the patterns by which the Spanish lived alongside indigenous people. The island of Hispaniola had many indigenous groups, such as the Arawak. The Arawak were friendly to the Spanish and helped them establish their colonies. The Spanish, on the other hand, treated the Arawak very badly. The Spanish friar and historian Bartolomé de las Casas wrote about the treatment of the Arawak, which included enslavement, starvation, and even crucifixion. This shocking and horrible treatment of indigenous people was at odds with the laws of Spain. When Columbus left the Americas after his first voyage, he brought an indigenous ambassador to meet with Isabel and Ferdinand, the queen and king of Spain. Queen Isabel found the indigenous people very interesting and declared that it was illegal to enslave them because they had "souls." However, the colonists, needing labor and looking down on the indigenous people, would continue a long history of mistreating indigenous populations. Historians question Christopher Columbus's role in establishing the rules the Spanish followed, and whether he intended the mistreatment of indigenous peoples or was simply acting out of human greed. Either way, the lawlessness of the Spanish toward the indigenous people would become a key feature of Spanish colonization.
This became one of the biggest differences between the Spanish and the Portuguese. The Spanish developed a system of mistreatment and brutality, building their colonial empire on conquest, whereas the Portuguese built their colonial model on an "I mapped it, I own it" mentality.

Center vs. Surrounding Regions

Another big difference between the Spanish and almost all of the other colonial states of the 15th and 16th centuries was that the Spanish encountered indigenous empires. This gave them a great advantage: by taking over the Aztec and Inca empires, the Spanish colonial world gained a source of wealth and materials that it would continue to exploit from the 15th through the 18th centuries. The Spanish colonial system had many problems, but despite these it had a firm basis of power in Latin America. Many Spaniards heard tales of the conquest and wanted to take as much of the New World as they could; this led to a wave of Spaniards, each looking for another empire to conquer. While this was the dream of many conquistadors, the problem was that there were few imperial centers left to conquer. Most of the conquistadors traveled throughout the Americas searching for gold and riches, only to leave empty-handed. Because they were looking for treasure, they were not interested in establishing long-term holdings or staying in the regions, and that created a unique opportunity for the Catholic Church to establish a presence there. This produced the periphery: areas outside the colonial imperial centers that would play a unique role in Spanish colonization. The center of power in these periphery areas was the Catholic Church, and the areas had limited relationships with the established centers of power, which meant a lack of guidance from the crown. After Christopher Columbus landed in the Americas, the Spanish quickly established the Caribbean as a major area of colonization.
The island of Hispaniola, in particular, was the Spanish center for exploration and conquest. Many of the conquistadors were eager to explore the Americas because they were lower-class individuals who dreamt of riches and treasures. The Spanish crown gave the governors of Hispaniola and Cuba the political authority to approve conquistadors' travel, which meant that ventures had to be authorized by those governors.

Conquest of Mexico

The model of conquest that the Spanish followed was to move into a society, quickly remove the head of its government, destroy the native religion, and then replace the local government and religion with those of Spain and the Catholic Church. This model was key for the Spanish in the conquest of the Aztec and Inca, and it would be the goal of many Spaniards after the conquest.

Learning Objectives
- Analyze how the Spanish conquered the Aztec Empire.
- Evaluate how the Spanish established a colony among the Aztec population.

Conquest of the Aztec Empire

One such conquistador who wanted to test his fortune in the unexplored Americas was Hernán Cortés. Cortés was born into a family of the lesser nobility and saw exploring the Americas as his way to earn fame and fortune. He first settled in Hispaniola, where he found that he was not happy with the lands there, before moving to Cuba. In Cuba, he earned a small plot of land and laborers. He also worked closely with the Spanish governor and became part of the colonial administration, helping to conquer the island. However, he gave up this life when he heard stories of riches elsewhere and dreamed of gaining them. The Spanish were telling tales of cities of gold, riches beyond their wildest dreams, and lands that were almost infinite. Cortés wanted to leave Cuba and seize those riches. The problem was that the governor of Cuba heard about Cortés's ambitions, and the relationship between the two men became difficult.
The governor heard that Cortés was gaining followers and decided to revoke his approval for the expedition. Cortés, learning that the expedition he had planned was now declared illegal, decided to take the band of men who followed him and leave before the governor could arrest him. The expedition that Cortés first made to Mexico was thus technically illegal and against the Spanish crown's own wishes. That was not the only problem Cortés faced at the time: he was also headed into the lands of Central America, where powerful indigenous empires already existed. The Aztec were known throughout Central America as a warrior people with vast riches in their capital city. The Aztecs had built their empire on trade and conquest. The center of the empire was the capital city of Tenochtitlan, built on an island in a lake and endowed with resources and defenses that would prove difficult for the Spanish to overcome. The Aztec emperor Montezuma II was a capable ruler who expanded trade, extended the empire throughout the central valley of Mexico, and ensured that the general peace and prosperity of the Aztec empire grew during the late 15th and early 16th centuries. Cortés arrived in Central America at the Yucatan Peninsula in 1519. Cortés and his men then moved farther north, to what is today Veracruz in Mexico, where they landed and would ultimately "discover" the Aztecs. Many of Cortés's men had heard about the large population of the Aztec empire and knew they would be outnumbered. To prevent mutiny among his own men, Cortés had all of his ships but one burned; this sent the message that they were not returning to Cuba. Cortés was determined to defeat any opposition. Marching with few men and limited supplies toward Tenochtitlan, he wanted a chance to meet with Montezuma—the leader of the Aztecs.
As he marched, Cortés met and formed alliances with other indigenous populations. These alliances were important because many of these peoples were hostile to the Aztec and would become key allies of the Spanish during the attack on Tenochtitlan. In Tenochtitlan, Montezuma saw things differently. After hearing about strangers from the east who had landed and were looking for gold, Montezuma thought that this was an angry god who needed to be appeased. Montezuma began sending messengers with gifts of gold to Cortés and his men, with messages that this was tribute to the god. Cortés, who was looking for gold, received these tribute packages and realized that there was a great deal of wealth to be had in Tenochtitlan. When Cortés did not turn around, Montezuma worried that the amount of gold had not been enough and sent still more gold and riches to appease him. This only further encouraged Cortés and his men to march toward the city. When Cortés arrived in Tenochtitlan, he was greeted by Montezuma. After months of traveling and building indigenous alliances, Cortés had assembled an army of indigenous people who supported his overthrow of the Aztec. Montezuma peacefully received Cortés and his massive army and treated them well. Many historians believe that Montezuma thought Cortés was a representative of the Aztec god Quetzalcoatl. But the situation changed when Cortés heard of an attack on his Spanish men near the coast of Veracruz: the Spanish governor in Cuba had sent other Spaniards to defeat Cortés, who left Tenochtitlan to stop them. Cortés was successful against the Cuban governor's men and banded them together with his own forces. Back in Tenochtitlan, the situation changed quickly. Cortés had left Pedro de Alvarado as one of the leaders of the Spanish in Tenochtitlan. Montezuma asked Alvarado for permission to celebrate the Feast of Toxcatl on May 22, 1520.
This was a festival during which the Aztecs celebrated a popular god by sacrificing humans. Alvarado at first approved the celebration, but once he realized that there would be human sacrifice, he attempted to stop it. When the Spanish went to the Aztec temple and tried to halt the event, the Aztec pushed back, upset that the Spanish were intervening. A fight ensued, known as the Massacre in the Great Temple. This was disastrous for the Spanish conquistadors, who were vastly outnumbered in Tenochtitlan and saw the Aztec population begin to turn on them. These tensions were worsened by the plagues the Aztec suffered during this time. The Aztecs fell gravely ill with European diseases such as smallpox, measles, mumps, and influenza, which meant that a city of close to one million people was gripped by a rampant plague. The Aztec upper class became very upset with Montezuma because he was engaging with the Spanish. Montezuma was killed on July 1, 1520, but the historical record is unclear about who killed him. The Spanish reported that the Aztec killed Montezuma because of his betrayal; the Aztec claimed that the Spanish killed Montezuma for fear of another attack. The death of Montezuma and the violence at the Feast of Toxcatl put the city of Tenochtitlan on edge, and the people were enraged at the Spanish. On the night of June 30–July 1, 1520, the Spanish barely escaped from Tenochtitlan; this became known as La Noche Triste. Cortés ordered his men to retreat to the nearby city of Tlaxcala. Much of the treasure looted by Cortés and his men was lost during the escape. Tenochtitlan became an epicenter of disease, and over the next few months the city's population fell drastically ill. The population suffered greatly, and the city's defenses were weakened.
Cortés, on the other hand, began to put together an army to attack the city. Cortés was a master at finding weaknesses in the Aztec empire. One of the key problems the Aztec had in building their empire was that they had fought many other indigenous groups in the region surrounding Tenochtitlan. Cortés brought these groups together between July and August to practice besieging and taking down the capital. By August, Cortés marched on Tenochtitlan. The yearlong campaign against the city worked, and on August 13, 1521, Cortés and the Spanish captured the Aztec Empire and claimed it for Spain. Cortés, after almost three years of fighting and conquest, was the sole leader of the largest empire in the Americas. Cortés's new position as leader of a large New World empire was problematic. When he left for Mexico in 1518, he did so illegally; the entire conquest of Mexico was not sanctioned by the Spanish crown. Cortés, wanting to ensure that he had the support of Charles V, began writing letters of heavy apology. He also began sending larger than the required amounts of gold, to ensure that Charles would accept his apologies. Charles V in return granted Cortés the governorship of Mexico. Cortés began creating the government of New Spain, one of the two centers of government in the Americas. The establishment of New Spain meant that the Spanish military was centered in the newly named Mexico City, as were the royal courts and justice buildings, which meant it also became the center of Spanish bureaucracy. The Aztec population thus became subject to Spanish laws and customs. It is important to note that most of the actions taken by the Spanish were meant to remove the indigenous ways of living and replace them with Spanish culture. For example, many of the Aztec priests were killed, and those in training who were young enough were sent to Catholic schools for training in Christianity.
The Spanish killed all of the upper class and removed them from their positions of power so that the people would stop paying tribute to the Aztec upper class and instead pay that tribute to their new Spanish rulers. The Spanish did not, however, want to discard many of the methods that had made the Aztecs successful. Instead, the Spanish integrated many of the Aztecs' ways of government and society into the newly forming colonial culture. One of the key differences between the Spanish and Aztec governments was that the Spanish used a system of labor and tribute known as the encomienda. This was a system of rewarding Spaniards who were loyal to the conquest by giving them lands in the New World. The size of the lands granted was meant to be proportional to the risk each grantee undertook during the campaigns. This helped inspire many lower-class Spaniards to go fight in the New World. The goal of landowning for the Spanish was not just to have land for the sake of owning more land but to produce goods. This meant that the Spanish wanted to turn many of these new territories into vast farms. But that came with another problem. The indigenous populations were forced to work for the owner of the lands that they lived on; they were not paid for their work, nor were they able to complain that this system was unfair. In principle, if the indigenous people felt that they could not live or work under the conditions of the encomienda, they were able to leave and move to a different plot of land. Unfortunately, all the areas of Mexico were given as encomiendas to loyal Spaniards. This meant that the indigenous population was forced to work for a Spaniard no matter where they went and was never able to escape Spanish control. The conquest of Mexico demonstrates one of the two ways that the Spanish established a colonial center. Cortés's followers became rich because of the encomienda system. News spread in Spain of the wealth and power in the New World.
This helped to fuel a new generation of explorers who would travel to the Americas, searching for their own riches. The conquistadors had their heads filled with tales of riches and exotic lands, and the prize for any of their followers was vast tracts of land that could make them wealthy. This method allowed the Spanish to more easily take over an established empire and turn it into a Spanish territory.

Conquest of the Inca

Learning Objectives

- Evaluate the differences in the Spanish colonization between the Aztec and the Incan populations.
- Analyze the Incan population's impact on the colonial society.

Conquest of the Incan Empire

The second center that the Spanish created, with the conquest of Peru, was in the Andes. The Incan empire was built from a trade federation that spanned much of South America. The Inca empire was the largest of the civilizations in the Americas before Columbus. Formed in the Peruvian highlands in the 13th century, the Inca spread southward throughout the Andes by the 15th century. Among the keys to Incan success were central roads, terrace farming, and a federation of labor and tribute from local tribes. The federation saw great successes throughout the South American continent through trade, and there was very little conflict over political leadership. The son of the Inca ruler was usually the leader of the army; this gave the leadership key understanding of and insight into how the military worked. The Incan leadership remained stable throughout the 13th to 15th centuries. In 1524, the Inca leader died of a high fever, probably due to the diseases that were appearing in South America. His death was a very big problem because he had two sons who would begin to fight for the throne of the Inca. For five years, the two brothers ruled peacefully, Atahualpa in the north and Huascar in the south. But Huascar wanted to have power in the Incan capital of Cuzco.
He marched to Cuzco and arrested Atahualpa. This started a great fight among the Incan nobles, as some supported Huascar as the legitimate leader of the Inca and others supported Atahualpa. After a very bloody civil war, Atahualpa was victorious. Although Atahualpa won, the Inca were badly weakened; the deep divisions would be a key reason why the Spaniard Pizarro would be victorious. Francisco Pizarro was a unique conquistador. He was born in Spain in 1478 CE to pig farmers. Being poor, Pizarro never learned to read or write. He left for the New World, in search of fortune and fame, in 1509 CE. Pizarro made a name for himself by accompanying Balboa as he crossed the Isthmus of Panama in 1513 CE, when he became one of the early Europeans to see the Pacific Ocean. But when division arose between Balboa and other conquistadors, Pizarro arrested Balboa and put him on trial. Balboa was ultimately beheaded in 1519 CE. Pizarro, on the other hand, was rewarded with leadership positions in the newly forming Panama City. While leader of Panama City, Pizarro began to hear tales of a city of gold, which the Spanish called El Dorado. Tales grew throughout Panama, and Pizarro became interested in finding this famed city. The conquest of Mexico in 1521 also fueled rumors and pushed Pizarro to begin looking at South America. New stories of a large empire in South America began to circulate, centering around a civilization in the mountains that was divided. Pizarro put together an expedition in 1524 CE, but it failed due to bad weather and hostile relationships with the indigenous peoples. In 1526, Pizarro attempted his second expedition with his long-time partner Diego de Almagro, with whom he agreed to divide the spoils of the conquest equally. After sailing south, the Spanish expedition ran into trouble with bad weather and fighting with indigenous populations.
Pizarro and his partner were constantly fighting about who should lead and how the expedition should be run; this led them to divide their men. On an island off the coast of Colombia, Pizarro divided the party by sending Almagro northward to Panama for more resources and men, while Pizarro moved south into Peru with only thirteen men. In 1528 CE, after several months at sea, Pizarro landed in Peru. He and his men were welcomed by indigenous people, who wore numerous gold and silver decorations. Upon landing, Pizarro heard tales of a powerful king who ruled the area. Afraid to attack with his small number of men, he returned to Panama for more resources. After much thought, Pizarro decided it best to petition the Spanish king for formal permission to conquer this new territory. This was to secure his position as the sole ruler, if successful, and to ensure that he would be the most powerful man on the South American continent. After King Charles granted Pizarro his request, he began to plan for his expedition, set for 1530 CE. Pizarro's third expedition succeeded in landing in Peru. He arrived near Caxas on the Peruvian coast and sent his commander Hernando de Soto to establish relationships with the local population. It was here that Pizarro learned that the Incan leader was nearby, in a city called Cajamarca. Pizarro marched a small number of men south to Cajamarca to meet with Atahualpa. The meeting between the two leaders was disastrous. Following the conquest of Mexico, the Spanish crown had made new laws requiring that, before war could be declared on a population, a priest deliver a message that any indigenous peoples who converted to Christianity, swore allegiance to the Spanish king, and agreed to pay tribute would be spared, and war would be averted. During the meeting between Atahualpa and Pizarro, the priest delivered this command from the king of Spain to Atahualpa.
It was reported by the Spanish that Atahualpa said that he was no man's tributary, and war then ensued. The Battle of Cajamarca on November 16, 1532 CE ended with the defeat and capture of Atahualpa. From legends of cities of gold to the conquest of Mexico, Pizarro and his men were interested in seizing the riches of the Inca. When Pizarro landed and saw the gold and silver, he knew that there were vast riches in South America. With Atahualpa as a captive, Pizarro began demanding payment from the Inca for their leader. These ransom notes demanded rooms full of gold and silver. At first, the Inca complied, giving the Spanish one room of gold and two of silver. Pizarro had promised that he would release the leader when this was accomplished. Yet, when the Inca satisfied these conditions, Pizarro increased his demands. The divisions among the Inca started to show at this point: the supporters of Huascar began to call for Atahualpa's death, while the supporters of Atahualpa wanted to continue to pay the Spanish for his release. It was clear by the middle of 1533 CE, after Pizarro drew up twelve charges against Atahualpa, that the Spanish had no intention of releasing him. Pizarro convicted Atahualpa, and Almagro sentenced him to death in August 1533 CE. It is interesting to note that there was division among the Spanish over what to do with Atahualpa: de Soto wanted Atahualpa to remain alive, while Almagro called for his death. The consequences of Atahualpa's death were immediate; the divided Inca became unified against the Spanish. The majority of the Incan leaders began to fight against the Spanish. It would take another 200 years and the death of another Incan leader, Túpac Amaru, before the Spanish were able to peacefully integrate all of Incan society into their reign.
The integration of Peru was an important step for the Spanish conquistadors, as they were able to successfully bring a second major empire in the Americas into their own growing political organization. The biggest difference between the Spanish conquest of the Inca and that of the Aztec was the system of trade and tribute that the Spanish gained from the Inca. The Spanish were highly interested in silver, and the Incan people brought tribute from the southern reaches of their territory. Additionally, the Spanish developed a system of forced labor called the mita, which had originated with the Inca; the mita used temporary forced labor to finish projects such as roads and bridges. The Spanish mita, however, was a bit different: the Spanish required each indigenous person to work in the mines of Potosí for a short period every several years without pay from the crown. The goal was to extract as much silver from the region as possible. The problem was that the Spanish imposed very harsh working conditions in the mining of silver, and the population was not well cared for. Silver mining was very dangerous in itself, but the other part of the mita was the purification of the silver. After the rock was removed from the mountain, everything that was not silver had to be removed from the ore. The preferred method of the time was to boil mercury and place the silver ore in the mercury for purification. This was a very dangerous process and very unhealthy due to the effects of mercury poisoning. The process was first introduced by Viceroy Francisco de Toledo in the 1570s and became the backbone of the Spanish labor system in the Andes throughout the colonial period. It was estimated that 11,000 workers were forced into labor. One of the biggest effects of the mita was the significant drop in the indigenous populations, due to harsh working conditions and unhealthy environments.
The Incan empire became an important part of Spanish economics in South America; the mining of silver was key to the Spanish empire and to finding trade goods to send to China.

Other Spanish Conquests

Learning Objectives

- Evaluate the differences between the colonization of empires versus the other regions of Latin America.
- Evaluate the role of government and society in Latin American colonies.

Other Conquests

The Spanish conquistadors Cortés and Pizarro established colonial strongholds that were the centers of political, military, economic, and cultural life in the Americas. These centers of power were unique to the Spanish because other European powers did not find existing empires. The Spanish being early in the colonization of the Americas also meant that the extensive trade network these centers provided allowed goods and diseases to travel quickly throughout the Americas. The quick spread of diseases is an important component of why later European explorers, most notably the English, remarked on the lack of indigenous populations in North America. The Spanish center approach meant that political and economic power was concentrated in either Mexico City or Lima, and the Spanish militaries were centered in these two cities. That caused a great many problems. For example, as the English gained more naval experience at the end of the 16th century, they attacked the Spanish outskirts and robbed the Spanish of their treasures. The Spanish had a very difficult time stopping the English buccaneers because of the distance from the center to the outskirts; by the time the Spanish could react, the English would have been gone for months. While there were both positives and negatives to the center model of colonization, not all the conquistadors were happy with this arrangement, and several wanted to explore to find new centers of their own.
The lure of wealth and power swept through Spain as stories of the conquests of the Inca and Aztec became well known. This was the fuel for a new generation of conquistadors, who were eager to make it to the New World with a dream of finding the next indigenous empire to conquer. The problem with this mentality was that there were only the two major civilization centers in the Americas, and many of these newly energized conquistadors came to the New World with limited prospects. The Spanish explorers started traveling north from the Caribbean region into Florida and the American Southeast. Ponce de León traveled throughout Florida looking for a mythological fountain of youth, but what he found instead was land where agriculture was very difficult to maintain and an indigenous population that was very hostile to the Spanish. Hernando de Soto traveled throughout the American Southeast, establishing forts as far north as North Carolina. De Soto's relationship with the indigenous population was very good, mainly because the Cherokee became one of the first indigenous groups to immediately adopt Spanish weapons and farming techniques. But there were no large indigenous civilizations to be found in the American Southeast, and de Soto turned southward to the Caribbean. Other conquistadors, such as Álvar Núñez Cabeza de Vaca, went north from Mexico City to the American Southwest, near what is now Albuquerque, New Mexico. Finding no large civilization, Cabeza de Vaca returned to Mexico City. Others, such as Almagro, went south from Lima in search of the next large empire in the Andes. Almagro found the expanse of the Bolivian desert to be too much and stayed closer to the Pacific coastline, creating a thin strip of territory that later became Chile. These conquistadors never found the riches in the Americas that they longed for.
It is important to note that the conquistadors, once they moved into a region, would often become upset at the lack of resources, interrogate the indigenous population, and then move on toward a new goal. Often, these conquistadors would then leave behind priests and other Spaniards who would help to establish the region as a Spanish stronghold.

The Periphery

The conquistadors often moved quickly from place to place and left behind other Spaniards who would do the majority of the work of colonization, especially in the periphery. The majority of the Spanish holdings were in what are considered periphery regions, which included what would later be known as California, New Mexico, Florida, Colombia, Venezuela, Chile, and the Río de la Plata region. There was little government or need for a large bureaucracy. This meant that the central power in many of these periphery areas was typically the Catholic Church. The Catholic Church became a major power in the periphery because its central mission was the indoctrination of the indigenous populations into Christianity. There were few questions and little care about the methods that the Catholic Church used to ensure that the indigenous peoples became Christians. For example, in the Río de la Plata region of South America, the Jesuit priests made farmers, craft workers, and soldiers out of the Guaraní population indigenous to the region. In New Mexico, the Catholic Church used indigenous laborers and farmers to enrich itself. This led, of course, to revolts. In 1680 CE the Pueblo, of latter-day New Mexico, revolted and succeeded in driving the Catholic Church and the Spanish government out of the region for more than a decade. Life in the periphery was very different than in the center. Similar to the urban and rural divisions of today, the periphery had limitations on how strong the colonial government could be.
Health and safety laws created in the cities were often not enforced in the periphery regions. This meant that many of the indigenous people suffered and were put in unsafe and unhealthy conditions. This demanding and dangerous work meant that there was a significant decrease in the indigenous populations throughout the 15th to 18th centuries.

Spanish Colonial Culture

Learning Objectives

- Evaluate the differences in the colonial Latin American structures.

Latin American colonial culture rested upon a mixture of African, indigenous, and European cultures. While most people think of the conquistadors as individuals who conquered large territories, it is important to note that these were usually single men from Spain's lower classes. This matters because Spanish colonization was shaped by these men, who found themselves surrounded by women of indigenous and African heritage. Very soon, it was clear to the Spanish administration that it needed to keep track of the population, and it produced charts to help understand and organize the different racial categories in the Spanish world. These were known as the casta charts, from the Spanish word casta, meaning lineage. The organization was meant to help colonial and bureaucratic leaders understand and know the populations that they served. While it appears that the Spanish system was very structured and that individuals had only one option in life, this is not quite true. The Catholic Church kept records of births within the colonial system. An individual could go to the priest who held their records and pay a fee to be moved from their current racial category to a higher one. This type of bribery demonstrates that individuals in the Spanish system could purchase whiteness and move higher in the racial hierarchy.
Being higher in the racial hierarchy meant better access to jobs and social circles. The division of ethnicity was one of the complicated measures that divided the colonial Spanish Americas; another was birthplace. The Spanish used place of birth to assign political and economic power. Spaniards born on the Iberian Peninsula were called Peninsulares. The Peninsulares were individuals who could rise to the level of governor; they had the ability to go to Latin America and had limited restraints on their power. People of Spanish descent born in the Americas were called Creoles; they had less power, and usually they were not able to rise to the middle or upper levels of government. This division created deep resentment among populations in the Americas because these were quality jobs with political and economic powers attached. The division between Creoles and Peninsulares created a long-term rift that would help push Spanish colonial society to the brink of revolution at the end of the 18th century. Social circles and classes were very important to colonial Spanish America. The encomiendas were where large estates and vast amounts of material wealth were centered. These newly forming estates were critical for the upper class and developed what historians have termed the plantocracy, a hierarchy based on plantation ownership. In a plantocracy, plantation owners are at the top, and their families are in the tier below, enjoying less power. Usually the plantation owner's wife, known as the plantation mistress, would have been the second most powerful person on the plantation, followed closely by the plantation owner's children. Because these farms were so big and needed so much help to manage, the plantation owners often hired lower-class whites to help manage farms and resources. This third tier is important because the community that came from outside the family was central to the political and economic status of the plantation owner.
The lowest rung consisted of the enslaved and indigenous populations who were forced to do the work; they were often brutalized and treated very badly by those ranked above them. The class system in Spanish America demonstrated the key problems of class and race in the colonial world. The other way that Spanish Latin American culture was divided was along gender lines. The Spanish colonial system included rigid gender roles for both men and women. Women were expected to support the males and provide children. There were few jobs for women and limited educational opportunities. In popular culture, women inhabited one of two roles: either the Madonna or the prostitute. Men, on the other hand, were not held to the same standards, and the role of masculinity was defined by domination. It was during this period that the hypermasculine ideal became the traditional role of men. The stark differences between men and women provide a unique lens for viewing remarkable women like Sor Juana. Juana Inés de la Cruz was a Mexican writer, philosopher, composer, poet, and nun. She was a central figure during the Spanish Golden Age of literature. She taught herself to read from a library that she inherited from her grandfather and began to write poems after becoming a nun. Sor Juana became a voice for women and spoke out against the corruption of the church and the men of Mexico City. The Spanish system demonstrates how different it was from the English, French, or Dutch colonial worlds. The Spanish social division between Creoles and Peninsulares was a critical division that other Europeans did not create. The role of Africans and indigenous peoples in the Spanish system was another key difference from the English and the French. The centers of power meant that the Spanish integrated the indigenous populations quickly into their world as laborers, which led to their constant mistreatment by those at the upper levels of colonial society.
One of the most significant points of Spanish colonization was the economic resources extracted in the colonial peripheries, which would play a critical role on the world stage. The Spanish colonial system had two significant components, the center and the periphery. The conquests of Mexico and of the Inca were important because they were the empires upon which the Spanish built most of their own political power. Life in the periphery was dominated by the Catholic Church and centered on the relationship between the indigenous population and farm life. The Spanish were different than their Portuguese counterparts in that the Portuguese had a very hands-off mentality; “I mapped it, I own it” provided a good starting point. The Spanish, on the other hand, used brutality to repress indigenous and African populations. While the Spanish had incredible amounts of resource wealth within the empire systems, other colonizers did not have such good luck and were forced to focus their empires on trade relationships.

Economics: Potosí

The original goal of Europeans sailing westward was to find new ways to get to China and obtain more trade goods. The discovery of America was a serendipitous event that created new opportunities for Europeans. But while Latin America was growing economically profitable, Europeans still wanted to gain a bigger footprint in China. However, the Chinese were not interested in any of the new products that the Spanish brought from the New World. The Spanish goods would go to China and languish with little to no interest from Chinese buyers. The turning point for Spanish goods was the trade of silver from Latin America. The Spanish discovered the mountain of Potosí, in the Andes of South America, which held an almost pure silver vein. This mountain, in modern Bolivia, provided the majority of the silver Spain sent to China.
During the Middle Ages, the printing of flying cash meant that the Chinese economy was heavily hit by rampant inflation. The government began demanding that taxes be paid in silver. With the Spanish importing silver in massive quantities, the value of silver began to decrease in comparison to other metals, and the everyday person saw relief from their government debts. The importing of silver was a significant benefit to the average Chinese person, and this opened China for the Spanish. On the flip side, this caused significant problems for the Chinese, because the massive amounts of silver imported from Latin America caused rampant depreciation of silver, and its value crashed. This caused a ripple effect that helped destabilize the Chinese economy. In the Spanish empire, the rampant inflation caused ripple effects for the colonizer. The colonization of Latin America gave the Spanish access to large territories and many trade goods. But on the Iberian side of the Atlantic, there were significant political and economic problems. The reign of Charles V was the high-water mark for the Spanish crown. Charles's administration required that all the silver bound for China first pass through the Iberian Peninsula. This added an extra leg to the silver's journey and vast amounts of transport cost. This was at the same time as the Protestant Reformation, when Charles V, as Holy Roman Emperor, was attempting to crush the Protestants in the German territories. To pay for war materiel, Charles took loans against the silver coming out of Latin America. This put Spain in a weaker state because the silver was a key resource in the global trade with China, and European bankers understood that value. For many years, Charles took loans against the silver of Latin America, but eventually Spain became too indebted to the bankers.
This meant that the Spanish could no longer use the silver to finance wars, such as the Thirty Years' War, and that they lost political and economic power in Europe. This weakened state had a dramatic effect on the colonial world. Throughout the 15th to 18th centuries, the Spanish had power over their colonies, but through unwieldy laws, cultural practices such as the division between Creoles and Peninsulares over government positions, and the growth of the British in the Americas, the Spanish empire lost significant political and economic power in the New World. This weakening of the Spanish created a significant opening for other European colonizers, such as the French, Dutch, and British.

The Spanish in the Pacific

Middle-aged but bold, Ferdinand Magellan sought to strengthen Portuguese claims in the Pacific. Specifically, he sought a westward route to the Spice Islands. This precarious, uncharted route would give the Portuguese uncontested access to the Spice Islands. However, unimpressed by the proposal, the Portuguese king quickly dismissed Magellan. Not dissuaded, Magellan immediately turned his attention to Portugal's direct rival, Spain. Unlike his Portuguese counterpart, King Charles I was quick to support Magellan's endeavor. In 1519, under Spain's banner, Magellan's fleet set forth on the new, westward route across the Atlantic and Pacific to the Spice Islands. Magellan's voyage was fraught with trouble for months. Disease, malnutrition, starvation, and mutiny all plagued his fleet. Harsh seas battered their ships for eighteen months as the crews navigated the fierce waters at the southern tip of South America, now famously known as the Strait of Magellan. In spring 1521, the crews spotted Guam. A month later, they landed in the present-day Philippines. The reception of Magellan by the indigenous peoples in the Philippines was mixed. At times, the Europeans were treated as guests. Other encounters proved hostile.
Hostility arose over Magellan's attempt to convert local inhabitants to Christianity. The chieftain of the Mactan tribe in the Philippines considered the new arrivals a serious threat. In April 1521, conflict exploded between the Mactan peoples and Magellan's forces. The Spanish were overwhelmed, and Magellan was speared and killed in the battle. The surviving Spanish retreated to Spain, bedraggled and defeated. Some fifty years after the defeat by the Mactan, the Spanish returned to the Philippines under the leadership of Miguel López de Legazpi. After that they remained a dominant presence in the Philippines, establishing a stronghold at Manila: “The Pearl of the Orient.” With the Spanish domination of Manila came the spread of Catholicism. Augustinian, Jesuit, Franciscan, and Dominican friars and missionaries established themselves in the Philippines, and conversion spread throughout the islands. Manila grew into a cosmopolitan city that outshone Seville in its brilliance and diversity. A unique, blended culture of Spanish, Chinese, Malay, Tagalog, and Muslim peoples and customs emerged. But like its Portuguese rivals, the Spanish capital in the Philippines remained under threat of internal and external attack. The trade network which the Spanish had worked so hard to establish flourished. However, it would not be long before new rivals threatened to destroy everything the Spanish had worked to create.

Primary Source: Letter from Christopher Columbus

Letter from Christopher Columbus [Abridged]
Christopher Columbus (1493)

On the thirty-third day after leaving Cadiz I came into the Indian Sea, where I discovered many islands inhabited by numerous people. I took possession of all of them for our most fortunate King by making public proclamation and unfurling his standard, no one making any resistance. To the first of them I have given the name of our blessed Saviour, by whose aid I have reached this and all the rest; but the Indians call it Guanahani.
To each of the others also I gave a new name, ordering one to be called Sancta Maria de Concepcion, another Fernandina, another Isabella, another Juana; and so with all the rest. As soon as we reached the island which I have just said was called Juana, I sailed along its coast some considerable distance towards the West, and found it to be so large, without any apparent end, that I believed it was not an island, but a continent, a province of Cathay. But I saw neither towns nor cities lying on the seaboard, only some villages and country farms, with whose inhabitants I could not get speech, because they fled as soon as they beheld us. I continued on, supposing I should come upon some city, or country-houses. At last, finding that no discoveries rewarded our further progress, and that this course was leading us towards the North, which I was desirous of avoiding, as it was now winter in these regions, and it had always been my intention to proceed Southwards, and the winds also were favorable to such desires, I concluded not to attempt any other adventures; so, turning back, I came again to a certain harbor, which I had remarked. From there I sent two of our men into the country to learn whether there was any king or cities in that land. They journeyed for three days, and found innumerable people and habitations, but small and having no fixed government; on which account they returned. Meanwhile I had learned from some Indians, whom I had seized at this place, that this country was really an island. Consequently I continued along towards the East, as much as 322 miles, always hugging the shore. 
There, at the very extremity of the island, I saw another island to the Eastwards, distant 54 miles from this Juana, which I named Hispana; and proceeded to it, and directed my course for 564 miles East by North as it were, just as I had done at Juana… …The inhabitants of both sexes of this and of all the other islands I have seen, or of which I have any knowledge, always go as naked as they came into the world, except that some of the women cover their private parts with leaves or branches, or a veil of cotton, which they prepare themselves for this purpose. They are all, as I said before, unprovided with any sort of iron, and they are destitute of arms, which are entirely unknown to them, and for which they are not adapted; not on account of any bodily deformity, for they are well made, but because they are timid and full of terror. They carry, however, canes dried in the sun in place of weapons, upon whose roots they fix a wooden shaft, dried and sharpened to a point. But they never dare to make use of these; for it has often happened, when I have sent two or three of my men to some of their villages to speak with the inhabitants, that a crowd of Indians has sallied forth; but when they saw our men approaching, they speedily took to flight, parents abandoning children, and children their parents. This happened not because any loss or injury had been inflicted upon any of them. On the contrary I gave whatever I had, cloth and many other things, to whomsoever I approached, or with whom I could get speech, without any return being made to me; but they are by nature fearful and timid. But when they see that they are safe, and all fear is banished, they are very guileless and honest, and very liberal of all they have. No one refuses the asker anything that he possesses; on the contrary they themselves invite us to ask for it.
They manifest the greatest affection towards all of us, exchanging valuable things for trifles, content with the very least thing or nothing at all. But I forbade giving them a very trifling thing and of no value, such as bits of plates, dishes, or glass; also nails and straps; although it seemed to them, if they could get such, that they had acquired the most beautiful jewels in the world.

From The University of Texas at Austin, Thomas Jefferson Center for the Study of Core Texts & Ideas

Primary Source: Aztec Account of Spanish Colonization

In 1519 Hernán Cortés sailed from Cuba, landed in Mexico and made his way to the Aztec capital. Miguel León-Portilla, a Mexican anthropologist, gathered accounts by the Aztecs, some of which were written shortly after the conquest.

Speeches of Motecuhzoma and Cortés

When Motecuhzoma [Montezuma] had given necklaces to each one, Cortés asked him: "Are you Motecuhzoma? Are you the king? Is it true that you are the king Motecuhzoma?" And the king said: "Yes, I am Motecuhzoma." Then he stood up to welcome Cortés; he came forward, bowed his head low and addressed him in these words: "Our lord, you are weary. The journey has tired you, but now you have arrived on the earth. You have come to your city, Mexico. You have come here to sit on your throne, to sit under its canopy. "The kings who have gone before, your representatives, guarded it and preserved it for your coming. The kings Itzcoatl, Motecuhzoma the Elder, Axayacatl, Tizoc and Ahuitzol ruled for you in the City of Mexico. The people were protected by their swords and sheltered by their shields. "Do the kings know the destiny of those they left behind, their posterity? If only they are watching! If only they can see what I see! "No, it is not a dream. I am not walking in my sleep. I am not seeing you in my dreams.... I have seen you at last! I have met you face to face! I was in agony for five days, for ten days, with my eyes fixed on the Region of the Mystery.
And now you have come out of the clouds and mists to sit on your throne again. "This was foretold by the kings who governed your city, and now it has taken place. You have come back to us; you have come down from the sky. Rest now, and take possession of your royal houses. Welcome to your land, my lords!" When Motecuhzoma had finished, La Malinche translated his address into Spanish so that the Captain could understand it. Cortés replied in his strange and savage tongue, speaking first to La Malinche: "Tell Motecuhzoma that we are his friends. There is nothing to fear. We have wanted to see him for a long time, and now we have seen his face and heard his words. Tell him that we love him well and that our hearts are contented." Then he said to Motecuhzoma: "We have come to your house in Mexico as friends. There is nothing to fear." La Malinche translated this speech and the Spaniards grasped Motecuhzoma's hands and patted his back to show their affection for him....

Massacre in the Main Temple

During this time, the people asked Motecuhzoma how they should celebrate their god's fiesta. He said: "Dress him in all his finery, in all his sacred ornaments." During this same time, The Sun commanded that Motecuhzoma and Itzcohuatzin, the military chief of Tlatelolco, be made prisoners. The Spaniards hanged a chief from Acolhuacan named Nezahualquentzin. They also murdered the king of Nauhtla, Cohualpopocatzin, by wounding him with arrows and then burning him alive. For this reason, our warriors were on guard at the Eagle Gate. The sentries from Tenochtitlan stood at one side of the gate, and the sentries from Tlatelolco at the other. But messengers came to tell them to dress the figure of Huitzilopochtli. They left their posts and went to dress him in his sacred finery: his ornaments and his paper clothing. When this had been done, the celebrants began to sing their songs. That is how they celebrated the first day of the fiesta.
On the second day they began to sing again, but without warning they were all put to death. The dancers and singers were completely unarmed. They brought only their embroidered cloaks, their turquoises, their lip plugs, their necklaces, their clusters of heron feathers, their trinkets made of deer hooves. Those who played the drums, the old men, had brought their gourds of snuff and their timbrels. The Spaniards attacked the musicians first, slashing at their hands and faces until they had killed all of them. The singers-and even the spectators- were also killed. This slaughter in the Sacred Patio went on for three hours. Then the Spaniards burst into the rooms of the temple to kill the others: those who were carrying water, or bringing fodder for the horses, or grinding meal, or sweeping, or standing watch over this work. The king Motecuhzoma, who was accompanied by Itzcohuatzin and by those who had brought food for the Spaniards, protested: "Our lords, that is enough! What are you doing? These people are not carrying shields or macanas. Our lords, they are completely unarmed!" The Sun had treacherously murdered our people on the twentieth day after the captain left for the coast. We allowed the Captain to return to the city in peace. But on the following day we attacked him with all our might, and that was the beginning of the war.

From Miguel León-Portilla, ed., The Broken Spears: The Aztec Account of the Conquest of Mexico (Boston: Beacon Press, 1962), pp. 64-66, 129-131.

Primary Source: Las Casas, Destruction of the West Indies

A Short Account of the Destruction of the Indies
Bartolomé de las Casas (1542)

The Americas were discovered in 1492, and the first Christian settlements established by the Spanish the following year. It is accordingly forty-nine years now since Spaniards began arriving in numbers in this part of the world.
They first settled the large and fertile island of Hispaniola, which boasts six hundred leagues of coastline and is surrounded by a great many other large islands, all of them, as I saw for myself, with as high a native population as anywhere on earth. Of the coast of the mainland, which, at its nearest point, is a little over two hundred and fifty leagues from Hispaniola, more than ten thousand leagues had been explored by 1541, and more are being discovered every day. This coastline, too, was swarming with people and it would seem, if we are to judge by those areas so far explored, that the Almighty selected this part of the world as home to the greater part of the human race. God made all the peoples of this area, many and varied as they are, as open and as innocent as can be imagined. The simplest people in the world - unassuming, long-suffering, unassertive, and submissive - they are without malice or guile, and are utterly faithful and obedient both to their own native lords and to the Spaniards in whose service they now find themselves. Never quarrelsome or belligerent or boisterous, they harbour no grudges and do not seek to settle old scores; indeed, the notions of revenge, rancour, and hatred are quite foreign to them. At the same time, they are among the least robust of human beings: their delicate constitutions make them unable to withstand hard work or suffering and render them liable to succumb to almost any illness, no matter how mild. Even the common people are no tougher than princes or than other Europeans born with a silver spoon in their mouths and who spend their lives shielded from the rigours of the outside world. They are also among the poorest people on the face of the earth; they own next to nothing and have no urge to acquire material possessions. As a result they are neither ambitious nor greedy, and are totally uninterested in worldly power. 
Their diet is every bit as poor and as monotonous, in quantity and in kind, as that enjoyed by the Desert Fathers. Most of them go naked, save for a loincloth to cover their modesty; at best they may wrap themselves in a piece of cotton material a yard or two square. Most sleep on matting, although a few possess a kind of hanging net, known in the language of Hispaniola as a hammock. They are innocent and pure in mind and have a lively intelligence, all of which makes them particularly receptive to learning and understanding the truths of our Catholic faith and to being instructed in virtue; indeed, God has invested them with fewer impediments in this regard than any other people on earth. Once they begin to learn of the Christian faith they become so keen to know more, to receive the Sacraments, and to worship God, that the missionaries who instruct them do truly have to be men of exceptional patience and forbearance; and over the years I have time and again met Spanish laymen who have been so struck by the natural goodness that shines through these people that they frequently can be heard to exclaim: 'These would be the most blessed people on earth if only they were given the chance to convert to Christianity.' It was upon these gentle lambs, imbued by the Creator with all the qualities we have mentioned, that from the very first day they clapped eyes on them the Spanish fell like ravening wolves upon the fold, or like tigers and savage lions who have not eaten meat for days. The pattern established at the outset has remained unchanged to this day, and the Spaniards still do nothing save tear the natives to shreds, murder them and inflict upon them untold misery, suffering and distress, tormenting, harrying and persecuting them mercilessly. We shall in due course describe some of the many ingenious methods of torture they have invented and refined for this purpose, but one can get some idea of the effectiveness of their methods from the figures alone. 
When the Spanish first journeyed there, the indigenous population of the island of Hispaniola stood at some three million; today only two hundred survive. The island of Cuba, which extends for a distance almost as great as that separating Valladolid from Rome, is now to all intents and purposes uninhabited; and two other large, beautiful and fertile islands, Puerto Rico and Jamaica, have been similarly devastated. Not a living soul remains today on any of the islands of the Bahamas, which lie to the north of Hispaniola and Cuba, even though every single one of the sixty or so islands in the group, as well as those known as the Isles of Giants and others in the area, both large and small, is more fertile and more beautiful than the Royal Gardens in Seville and the climate is as healthy as anywhere on earth. The native population, which once numbered some five hundred thousand, was wiped out by forcible expatriation to the island of Hispaniola, a policy adopted by the Spaniards in an endeavour to make up losses among the indigenous population of that island. One God-fearing individual was moved to mount an expedition to seek out those who had escaped the Spanish trawl and were still living in the Bahamas and to save their souls by converting them to Christianity, but, by the end of a search lasting three whole years, they had found only the eleven survivors I saw with my own eyes. A further thirty or so islands in the region of Puerto Rico are also now uninhabited and left to go to rack and ruin as a direct result of the same practices. All these islands, which together must run to over two thousand leagues, are now abandoned and desolate.
From Modern History Sourcebook, Fordham University

Attributions

Images courtesy of Wikimedia Commons: https://en.wikipedia.org/wiki/Luis_de_Mena#/media/File:Casta_Painting_by_Luis_de_Mena.jpg

Boundless World History
https://www.coursehero.com/study-guides/boundless-worldhistory/the-age-of-discovery/
https://www.coursehero.com/study-guides/boundless-worldhistory/spain-and-catholicism/

Work based around the ideas of Patricia Seed: Ceremonies of Possession in Europe's Conquest of the New World, 1492–1640
French Colonies in the Americas and the Caribbean Sea

Overview

Initial French Expeditions across the Atlantic Ocean

Learning Objectives
- Analyze the differences in how Europeans established different colonial models in the Atlantic and Indian Ocean Worlds.
- Compare and contrast the Spanish, French, Dutch, English, and Portuguese colonial systems.

Key Terms / Key Concepts
Christopher Columbus: Genoese explorer credited with the discovery of the Americas
New France: first French colony in North America, established along the St. Lawrence River
Mercantilism: economic ideology embraced by European imperial powers, based on the concept that colonies were founded to benefit the countries that founded them

During the Age of Exploration/Discovery, the French—along with the Spanish, Portuguese, English, and Dutch—established settlements and colonies in the Americas and the Caribbean Sea. These settlements and colonies were part of the unification of humanity across the Atlantic and Pacific Oceans, along with the mercantilist economic development of these European powers. This mercantilist development went hand-in-hand with the imperial competition and struggle among these powers. While the British, the Portuguese, and the Spanish colonial empires eclipsed the French colonial empire in the Americas and the Caribbean, the French colonial presence still left a mark on and legacies for the Americas and the Caribbean islands that are still evident in the present day.

French mariners, among other European mariners, did not initially come to the Americas to establish colonial settlements. They sought a northwest passage across the Atlantic and Pacific Oceans to Asia. Finding lands, natural resources, and rivers, among other geographic features, was serendipity. During the first half of the sixteenth century the French government sponsored expeditions led by two explorers across the Atlantic Ocean. Florentine mariner Giovanni da Verrazano sailed across the Atlantic in 1524.
Verrazano was one of a number of explorers during this period, including Christopher Columbus, who worked for countries other than their own. French King Francis I asked Verrazano to make the trip in search of new trade routes. Verrazano traveled up the Atlantic coast from present-day South Carolina to the coast of Nova Scotia, without finding a passage to Asia. The second French-sponsored explorer, Jacques Cartier, led three expeditions across the Atlantic between 1534 and 1542. During the first two he explored the Gulf of St. Lawrence and the St. Lawrence River. His third expedition, in 1541–42, was an unsuccessful effort to establish a French settlement on the St. Lawrence River. Cartier's expeditions laid the foundation for New France. During the second half of the sixteenth century France suffered through religious discord and warfare growing out of the Reformation; this discord distracted French efforts at exploring and settling along the Atlantic coast until the early seventeenth century.

French North America

Learning Objectives
- Analyze the differences in how Europeans established different colonial models in the Atlantic and Indian Ocean Worlds.
- Compare and contrast the Spanish, French, Dutch, English, and Portuguese colonial systems.

Key Terms / Key Concepts
New France: first French colony in North America, established along the St.
Lawrence River
Louisiana: second French colony in North America, established along the Mississippi River
Huguenots: members of the Protestant Reformed Church of France during the 16th and 17th centuries; inspired by the writings of John Calvin
Mercantilism: economic ideology embraced by European imperial powers, based on the concept that colonies were founded to benefit the countries that founded them
1763 Treaty of Paris: treaty that ended the 1754–61 "Great War for Empire," providing for French loss of its North American colonies and paving the way for disputes that led to the American Revolution

In North America the French established two huge colonies, each along a major North American river. The first of the two was New France, founded along the St. Lawrence River. The second was Louisiana, with the Mississippi River as its axis. The French, like the English, established their first lasting settlements in the early seventeenth century. In contrast, the Spanish had established their first settlements in North America during the sixteenth century. Division over the Reformation in the sixteenth century hindered both English and French efforts to explore and settle North America. During the last third of the sixteenth century religious divisions between Catholics and Huguenots, embodied in a succession of religious wars, nearly tore apart France; this prevented the government from committing resources to the construction of a colonial empire in the Americas. With the conclusion of religious hostilities in France in 1598, the French government under Henry IV could devote more resources to the establishment of a permanent, if small, French presence in present-day eastern Canada. During that period, the latter half of the sixteenth century, fishermen dominated the French presence in the St. Lawrence River valley and along the coast of eastern Canada.
The growth of French fishing in the northwestern Atlantic led to the establishment of winter settlements, the development of a fur trade, and more contacts with indigenous peoples, all activities not requiring an extensive colonial presence.

New France

The single most important individual in the early development of New France was Samuel de Champlain, a somewhat enigmatic figure who dedicated his energies to seeing that New France thrived as a colony and not just a collection of outposts. Founded in 1608, Quebec was the first settlement of New France, and it has lasted to the present day. Over the next forty years French colonists founded Trois-Rivières in 1634 and Montreal in 1642. Those two settlements, along with Quebec, would become the three small urban centers of a slowly growing New France. The original focus of New France and Louisiana was the fur trade. The French government also made modest efforts to encourage migrants to settle for the purpose of farming, in order to establish self-sufficiency. The original political, religious, and social structures of New France were taken from those of early modern and medieval France, partly rooted in that nation's feudal institutions, practices, and structures. The original seigneurial system for land distribution was taken from the feudal system of land tenure in France. As part of this system seigneurs held title to landed estates. The lands of these estates were distributed to settlers, known as habitants, for the purpose of farming. Remnants of this system survived into the nineteenth century. The fur trade required the French colonists to interact with indigenous peoples of the region, both through diplomacy and warfare. The fur traders, settlers, missionaries, and government officials of New France developed a complex set of relationships with these people that were shaped by assorted and antagonistic interests. Their first interactions were with the Huron and the Iroquois.
By the mid-seventeenth century the withdrawal of the Huron and Iroquois from the St. Lawrence River valley opened new opportunities for French immigrants in the fur trade and farming. Regardless, the colonial population continued to grow slowly because of the distance of the colony from France, the climate, and the perception of limited economic opportunities. During the 1660s New France experienced a significant improvement in fortunes when Louis XIV made this colony a priority in his pursuit of an expanding French global empire. Louis XIV and his chief minister, Jean-Baptiste Colbert, saw French colonies in terms of how they could benefit France, as part of the ideology of mercantilism. Louis and Colbert made the organization of an effective colonial government in New France a priority. New France now received more of the attention and resources it needed to grow and develop, including its placement under the authority of the Department of the Marine. However, even with this new attention to its development, New France continued to grow slowly during the rest of the seventeenth century and into the first half of the eighteenth century.

Maturation of New France

During the first half of the eighteenth century, specifically between the Wars of the Spanish and the Austrian Succession, 1713–1744, New France matured as a colonial society. A number of Canadian historians have characterized it as a golden age. During this period the economy of New France expanded unevenly, largely as a result of the relative peace between the British and French North American colonies, as well as between Britain and France around the world. French economic expansion and relative prosperity during the first half of the eighteenth century was grounded in mercantilism. The French government valued New France, among the other French colonies, for its natural resources and as markets for manufactured goods, above and largely to the exclusion of all else.
In the mercantilist economies of the eighteenth-century European empires, raw materials and markets were all that mattered, notwithstanding any lip service paid to the Christian missionary impulse. During this period the culture of New France did not so much mature as blossom, fed by population growth and the new wealth generated, which led to economic growth and prosperity. This maturation of New France, from the early eighteenth century, was marked by the continuity of economic, political, religious, and social institutions and practices from early modern and medieval France; the militarization of New France as a necessary response to the threat of English conquest and Iroquois hostility; and the economic opportunities afforded by the resources of New France.

Louisiana

The French government established the second French North American colony, Louisiana, in 1682. The axis of this colony was the Mississippi River, explored extensively by Robert de La Salle in the years leading up to the colony's establishment. In a number of ways, Louisiana was a southward extension of New France. As with New France, the fur trade was the initial economic engine of Louisiana. Coureurs des bois—French traders—drove the development of this trade. Louisiana grew even more slowly than New France, being more difficult to reach for potential French colonists and possessing fewer visible economic incentives. Fewer than ten thousand European immigrants settled in French Louisiana during the eighteenth century. Most of these lived in New Orleans, the colony's most populous city, or other settlements along the Mississippi River and its tributaries. French colonial society in Louisiana did not mature beyond these scattered and mostly small settlements that punctuated these river valleys. Consequently, this colonial society was mostly what the French settlers had brought with them from France.
This French colonial culture did not have much time to interact and merge with indigenous and African cultural elements before French Louisiana was divided between the Spanish and the British as part of the 1763 Treaty of Paris that ended the 1754–61 war between them, also known as the French and Indian War.

The End of New France and Louisiana

One of the key factors in France's loss of its North American colonies was the small population of each colony, most of whom lived along the Mississippi and St. Lawrence Rivers and their tributaries. While the French government claimed hundreds of thousands of square kilometers on both sides of each river, the population of both, at the time that France lost them in the 1763 Treaty of Paris, was less than 100,000; on the other hand, the population of the British colonies along the Atlantic coast was over a million. French settlers in these two colonies were spread out in a number of small settlements, punctuated by a few larger settlements—such as Montreal, New Orleans, and Quebec—which would evolve into large cities beginning in the nineteenth century. Regardless of a colonial population of nearly 100,000 French subjects, the French government ultimately saw New France and Louisiana as little more than defensive and offensive bastions in the military struggle for North America and pawns or chips in the peacemaking process that concluded each war. By the late seventeenth century, the British, the French, and the Spanish vied for control of various parts of present-day Canada and the United States, outside of Alaska. The colonial struggle between Britain and France, also known as the Second Hundred Years War, was punctuated by four wars, concluding with the French and Indian War. With the 1763 Treaty of Paris that ended the French and Indian War, New France and Louisiana became part of the British and Spanish North American empires.
The residents of these areas have struggled to find their place ever since the British annexation. With respect to the fate of French Louisiana, the 1763 treaty divided this colony along the Mississippi River, with the Spanish west of the river and the British east of the river. During the late eighteenth and early nineteenth centuries this dividing line along the Mississippi River continued to be significant in international diplomacy, with the 1783 Treaty of Paris that established the Mississippi as the western border of the new United States, as well as the 1803 Louisiana Purchase by which the U.S. acquired much of what France had claimed as Louisiana. As the French colonial presence along the Mississippi River was sparse in 1763, descendants of these French subjects adapted to and/or embraced the dominant U.S. culture with the advance of U.S. settlement during the nineteenth century.

French Colonies in the Caribbean Sea and South America

Along with colonies in North America, the French also established a number of colonies among the Windward Islands along the eastern edge of the Caribbean Sea and one on the northern edge of South America. The French colonies in the Caribbean Sea were smaller geographically than other colonies, but proportionately more profitable because of staple crops, such as sugar and tobacco, grown on these islands. Accordingly, these Caribbean and South American colonies also garnered more attention and resources from the French government. The French began settling the Caribbean during the early seventeenth century. These settlements were part of the same European imperial competition then occurring in the Americas. The French established settlements on a crescent-shaped chain of islands in the eastern Caribbean, running from the northern crown of South America to Puerto Rico. The French also settled the western half of the island of Hispaniola.
During the seventeenth and eighteenth centuries a small elite group of slaveholding plantation owners came to control these French possessions, emerging as major players in France's developing global colonial empire. France's single colony in South America, Guyane, located on the northern crown of that continent, was also dominated by a sugar plantation economy, but it enjoyed only modest development and prosperity as measured by the mercantilist standards of the time.

Learning Objectives
- Analyze the differences in how Europeans established different colonial models in the Atlantic and Indian Ocean Worlds.
- Compare and contrast the Spanish, French, Dutch, English, and Portuguese colonial systems.

Key Terms / Key Concepts
New France: first French colony in North America, established along the St. Lawrence River
Louisiana: second French colony in North America, established along the Mississippi River
Middle Passage: the voyage across the Atlantic from Africa to the Americas; comprised the middle leg of the trans-Atlantic slave trade

Slavery in the French America and Caribbean Colonies

Geography, climate, and staple crops dictated where European colonists embraced slavery in the Americas and the Caribbean Sea. While it existed only marginally in New France and Louisiana, slavery thrived in the French Caribbean and, to a lesser extent, in Guyane. French slavery in the western hemisphere was part of the slaveholding system of the Atlantic World. The majority of slave labor went to producing sugar, very intensive work, and the French began to import more and more slaves to meet this demand. As part of this system Europeans purchased slaves along the west coast of Africa, likely over twelve million between 1400 and 1800. These slaves then endured the horrific Middle Passage across the Atlantic Ocean to the Caribbean islands, South America, and, to a lesser extent, North America.
It is here that African and European cultures began to mix, as can be seen in language, religion, and food cultures. For example, Vodou is a combination of indigenous African religions and Catholicism. This blending of African and European cultures produced something very different from either of its original sources. Slaves who ended up in the French Caribbean and Guyane helped to shape the cultures of the western hemisphere, a role largely unrecognized by European and European-American historians until the twentieth century. These slaves brought their own cultures with them, which combined with European and indigenous cultural influences and formed the new cultures of the western hemisphere. While European settlers and European-Americans controlled the underlying processes by which these cultures evolved and matured, they could not exclude the African and African-American cultural presence of the peoples they had enslaved. These African cultural influences are still present in the Caribbean islands settled by French colonists. Legacies of the French Colonial Presence in the Americas and the Caribbean While the French had lost their North American colonies by the late eighteenth century, and their possessions in South America and the Caribbean had become imperially insignificant by the end of the nineteenth century, the French colonial presence left its mark on the western hemisphere. Most visible is the French-Canadian province of Quebec, whose culture evolved from that of the original New France, influenced as it has been by the surrounding Anglo-Canadian culture. French linguistic culture is also present in the various Caribbean islands on which the French founded colonial settlements. This French presence in the western hemisphere, while overshadowed by the English and Spanish cultural presence, has added to the diversity of the Americas and the Caribbean. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. 
Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - 1699 Quebec print. Attribution: Charles Bécart de Fonville (1675-1703), Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:Vue_de_Qu%C3%A9bec_en_1699_avec_l%C3%A9gende_sur_les_quartiers.jpg. License: CC BY-SA: Attribution-ShareAlike - Age of Discovery. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Mercantilism. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - French colonization of the Americas. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - New France. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - French colonial empire. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Sovereign Council of New France. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Carib Expulsion. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - 1024px-Nouvelle-France_map-en.svg.png. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Cartier.png. Provided by: Wikipedia. License: Public Domain: No Known Copyright
Dutch Trade in the Pacific, South Africa, and Japan Overview Dutch Trade in the Pacific, South Africa, and Japan Holland (also called The Netherlands) entered the world of trading and shipping relatively late in comparison to its European rivals, Spain and Portugal. Early efforts by the Dutch to establish ports and trade networks in South Africa and the Far East met with limited success due to underfunded missions, as well as rival claims by the Portuguese and the English. The Dutch were very interested in obtaining trade goods from the regions they colonized. One of the qualities that made other regions, like China and Japan, so willing to trade with the Dutch was the Dutch interest in trade matters alone. The Dutch found unexpected success developing trade with Japan. Indeed, during Japan's seclusion, Holland became the "world's window" into a closed society. Learning Objectives - Examine Dutch influence and trade in the Pacific. - Evaluate the relationship between Dutch traders and Japan during the Japanese period of isolation. - Examine the impact and legacies of the Dutch settlement in Cape Town. 
Key Terms / Key Concepts Spice Islands: nickname given by Europeans to island-nations in Indonesia VOC: Dutch abbreviation for the Dutch East Indies Trading Company; a joint-stock company heavily involved in the Spice Trade Spice trade: lucrative trade in exotic, “eastern” spices, such as nutmeg Batavia: (present-day Jakarta) former capital of the Dutch East India Trading Company in the Pacific Cape Town: 1652 settlement by the Dutch in South Africa near the Cape of Good Hope Khoisan: a group of diverse, indigenous Africans who lived in south and southwest Africa Boer: Dutch name for Dutch farmers who settled permanently in South Africa archipelago: a chain of islands Sakoku: the period of Japanese isolation during the early seventeenth to mid-nineteenth centuries Deshima: island in Nagasaki Bay off Japan's coast that was home to Dutch settlers and artists during Japan's isolation period The Dutch in the Pacific and the Rise of the VOC Background In 1596, the Dutch arrived at the present-day island of Java in the South Pacific. Rich in natural resources, especially spices, the Spice Islands struck the Dutch as a market with great economic potential. Coffee, indigo, wood, and Asiatic spices such as nutmeg were all found on the islands. However, early Dutch settlements were under-protected, and as a result an omnipresent threat of attack from the British or Spanish existed. At the turn of the seventeenth century, two economic-political events evolved that catapulted the Dutch into the forefront of trade in the Far East. First, rivalry expanded between European countries as wealth from colonies poured into Spanish, Portuguese, and English coffers. The second event was the sharp escalation of Dutch wealth. Described as the first capitalists, the Dutch developed a strong banking business with Amsterdam as their economic center. Drawing on their capital and investors, the Dutch strengthened their navy to fully engage in the lucrative trade with the Far East. 
The VOC and the Spice Trade In 1602, the Dutch consolidated their merchant sailors and traders to form the massive Dutch East India Trading Company, abbreviated as VOC. The VOC served as the political, military, and commercial power for the Dutch in Southeast Asia. In the succeeding years, the Dutch built forts, storage facilities, and plantations that stretched from the Spice Islands in the South Pacific all the way to South Africa. Under the ruthless administrator Jan Pieterszoon Coen, the Dutch established a strong trade network with a capital at Batavia, which is present-day Jakarta. Dutch fleets were deployed to patrol the waterways and coasts surrounding the Spice Islands and other Dutch territories. The Dutch legacy toward Pacific Islanders is riddled with conflict and bloodshed. Beginning in 1621, the Dutch dispatched forces to quell resistance on the Banda Islands. Because their military technology far outpaced that of the islanders, by the end of the conflict over 15,000 indigenous peoples had been massacred by the Dutch. Survivors were forced off their lands and into a system of forced agriculture. They were required to strip their farms of all crops, growing instead spice trees such as clove and nutmeg. As these spice farms increased in size and scope, local food sources diminished, and many islanders starved. Dutch Decline in the Pacific For nearly two centuries, the Dutch retained a monopoly on the spice trade. Despite their efforts, though, their monopoly dwindled as spice trees spread beyond their islands to places as far as Africa’s eastern coast. And yet, the Dutch remained a powerful influence in Indonesia for the succeeding century. Their capital city flourished as an international trading port where Chinese ships moored and Portuguese merchants traded in the streets. Wealth flowed into Holland’s banks, but even more striking was its status as a significant world power in political and commercial affairs. 
With colonies established throughout the Pacific, Holland had become an empire. But even as Dutch influence in the Pacific grew, so too did internal rivalries and threats. And Dutch eyes were turned to further expansion of their realm. The Dutch in South Africa Dutch exploration was not limited solely to the Pacific Islands. Like the Portuguese and English, the Dutch sought to trade with Southeast Asia. However, as trade routes to Southeast Asia were charted, the southern coast of Africa became a natural halfway stop for sailors traveling from Europe to Asia. By the sixteenth century, a port emerged on the cape of South Africa--Table Bay. For a century, Europeans sailed into this southern port, trading with local Khoisan clans who raised and herded livestock. Then, in the mid-seventeenth century, a Dutch explorer landed at Table Bay and altered it forever. The Founding of Cape Town In 1652 Dutch explorer Jan van Riebeeck landed three ships at Table Bay. Immediately, he began to construct a permanent settlement in the name of the VOC. His immediate goal was to negotiate fair prices for meat between the local Khoisan clans and the Dutch sailors who stopped off at Table Bay on their way to Batavia in the Pacific. The settlement grew in size and in its Dutch population and soon took the name Cape Town. Trading between the Khoisan and the Dutch initially prospered. Prices were regulated. The Dutch procured livestock, especially cattle. In exchange, the Dutch did not force their religious or social beliefs onto the Khoisan people. In contrast to other European colonizers who sought to spread Christianity and European social practices, the Dutch largely respected Khoisan traditions and practices. Moreover, the Dutch introduced European fruits and vegetables to the diets of people in Cape Town. Initial goodwill soon gave way to reluctant tolerance, though, and then to resentment and conflict. 
Khoisan Resistance and the Emergence of the Boers For their part, the Khoisan were uneasy about the permanent Dutch settlement at Cape Town. Within five years of the settlement's establishment, the Dutch demanded impossibly high cattle quotas. In exchange, they offered luxury goods such as beads and precious gems, but little in the way of practical goods. Worse developments came within the first few years of the settlement. The Dutch VOC assessed the situation at Cape Town and determined that they had more soldiers than necessary. Slowly, soldiers were released from their service with the VOC and allowed to establish private farms outside of Cape Town. These Dutch farmers encroached on traditional Khoisan land and are forever remembered by their Dutch name, Boers ("boer" means "farmer" in Dutch). Tensions boiled over in 1659, seven years after the founding of Cape Town. United Khoisan clans attacked the Boer farms outside of Cape Town and drove the settlers back into the city, within the walls of the VOC fortress. Despite strong efforts, though, the Khoisan were unable to successfully overtake the fortress. Instead, they negotiated terms with Jan van Riebeeck, who told them in no uncertain terms that Khoisan land was now Dutch land. Legacies of Dutch Settlement Jan van Riebeeck's settlement of Cape Town established far more than a permanent Dutch trading post. It led to the permanent establishment of Dutch men, women, and children throughout South Africa--not only traders and artisans, but also the Boers who would transform South African agrarianism. Moreover, the Dutch presence in South Africa demonstrated the growing power of the VOC to a global audience, thereby increasing rivalries in Europe. For the Khoisan, the arrival of the Dutch was an initially beneficial relationship that turned sour in less than a decade. Their ancestral grazing lands were illegally seized; their traditional ways of farming were depleted and abused. 
While the 1659 conflict between the Khoisan and the Dutch saw unity between the Khoisan clans, it would not be repeated. Clannish differences prevented further unity and allowed the Dutch, and later the English, to secure a strong foothold in South Africa. A Window to the World: The Dutch Presence in Japan Dutch success in both South Africa and the Pacific Islands positioned them well to explore another set of islands in the Pacific, ones largely shrouded in secrecy--the Japanese archipelago. Although the Dutch were not the first Europeans to engage in exploration or commerce with the Japanese, they quickly won Japanese favor in the early 1600s. In contrast to Catholic European Jesuits and Franciscans who had visited Japanese shores, the Protestant Dutch remained largely focused on trade rather than converting Japanese people to Christianity. Economically progressive and more respectful of Japanese culture and beliefs, the Dutch sailors and settlements became the world's window to Japan during the period of sakoku. The Landing of the Liefde In 1598, a young English navigator named William Adams sailed with the Dutch fleet from Rotterdam toward China aboard the Dutch vessel, Liefde. As representatives of the VOC, the sailors would engage in trade and commerce in the Far East. Storms, disease, and skirmishes with the Portuguese and Spanish soon turned the voyage perilous. In 1600, the Liefde limped toward Japanese shores. The landing of the Liefde marked a critical turning point in Japanese foreign relations at the time. The Period of the Warring States had recently ended with the triumphant victory of the powerful daimyo, Tokugawa Ieyasu. Under his leadership, the Japanese islands were unified for the first time in recent history. But the Tokugawa rule was militaristic, hyper-conservative, and sought to eliminate all threats--including the proselytizing Europeans who tried to convert Japanese citizens to Christianity. 
Tokugawa Ieyasu considered the European presence subversive and sought to remove Europeans, especially the Portuguese and Spanish, from Japan. And yet, when the Liefde crashed ashore with William Adams and twenty Dutch sailors, the Japanese immediately provided help to the survivors. Moreover, interest arose in the "Red-haired barbarians." Tokugawa Ieyasu recognized the Dutch as different from other Europeans in two key areas. Firstly, the Dutch were capitalist traders, economically prosperous and progressive, quite possibly more so than any other European nation. Secondly, they were Protestant and uninterested in preaching conversion or changing Japanese cultural values. Moreover, the Dutch possessed state-of-the-art military and naval technology. For these reasons, Tokugawa Ieyasu found ways to invite the Dutch VOC to trade in Japan on a limited scale. In 1603, Tokugawa Ieyasu received the title of Shogun--supreme military leader of Japan. His policies soon forced Europeans from Japan's shores with one exception--the Dutch. Early Trading Relations Trade between Tokugawa Japan and the Dutch VOC began in earnest in 1609, when Dutch ships consistently appeared in Japan's southern bays. Respected for their purely business interests, a special relationship developed between the Dutch and the Japanese. Dutch factories appeared in Hirado on Japan's southernmost island and ultimately replaced the Portuguese factories and workers. Some Dutch sailors settled permanently in Hirado and became subjects of fascination to Japanese artists who marveled at their red beards. In exchange for Western military technology such as muskets and cannons, the Japanese traded fine arts and porcelain products. The Dutch later sold these finely made pieces of art for exorbitant prices in Europe. The price of Japanese-made products skyrocketed in the 1630s following the Tokugawa shogunate's enactment of sakoku--a foreign policy that expelled foreigners and made Japan a closed society. 
No one could enter or leave the country's border without the Shogun's knowledge and permission. While this significantly reduced the European presence, the Shogun continued to make an exception for the Dutch, although even they would no longer be entirely welcome within Japanese society for the next two hundred years. The Dutch Relocate to Deshima The Dutch maintained close connections to Japan during the sakoku. The relationship was not without problems, however. Xenophobic attitudes and anti-foreign policies escalated during the 1630s and 40s in Japan. Under the Tokugawa Shogunate, the Portuguese were forcefully evicted and Christians were violently persecuted. In Hirado, tension mounted between the Dutch and the Japanese. Dutch industry continued to thrive, but anti-foreign measures increased within Japan. In 1640, the Shogunate restricted Dutch freedom of movement. The same year, a Dutch merchant unthinkingly engraved "Anno 1640" just below his warehouse roof. The use of the Christian-Latin dating system was enough for the Tokugawas to relocate the Dutch. Forced from Hirado, the Dutch were relocated to a manmade island previously constructed for the Portuguese a few years earlier--Deshima Island. Located in Nagasaki Bay, Deshima was small, but the Dutch relocation proved enormously successful. It allowed the Japanese to enforce their isolationist policy, but also to attract the attention of the world. From August to October, Dutch VOC ships unloaded cargo from around the world and returned to Europe with Japanese lacquerware, teas, and silks. Moreover, Dutch sailors shared stories of Japan--a country cloaked in mystery to the rest of the world. The Dutch who lived permanently at Deshima also attracted attention. They became subjects of endless fascination for Japanese artists and scholars. They served as translators, and many Dutch words slowly were integrated into Japanese. Over time, the Japanese took a more active role in learning from the Dutch at Deshima. 
By the early 1700s, the Japanese removed the ban on Dutch books, with the exception of religious texts. The result was a practice of "learning from the Dutch" called Rangaku. Schools and institutions emerged within Japan in which students studied Dutch concepts of anatomy, botany, chemistry, and military science. For more than a century, the VOC supplied Japan with Western knowledge and ideas. But while Japan learned much about the Western world through books, the Western world remained largely ignorant of Japan until 1853, when the American Matthew Perry arrived in Tokyo Bay. Attributions Images from Wikimedia Commons Matsuda, Matt K. Pacific Worlds. Cambridge University Press, 2012. 70; 77. The Netherlands and You. "Japan and the Netherlands." https://www.netherlandsandyou.nl/your-country-and-the-netherlands/japan/and-the-netherlands/dutch-japanese-relations
English Colonization Overview English Colonization The English were very different from the Spanish, Portuguese, French, and Dutch in their colonization methods, and this was a very important point. The English focused mostly on trade, with limited engagement with indigenous populations overall. The English also promoted self-reliance and self-government. Learning Objectives - Compare and contrast the differences between the English and the other colonizers. - Analyze the differences between the English colonial systems of the East and West. - Evaluate the role of indigenous, African, and European peoples in the English colonial system. - Analyze the impact of indentured servants on the English system. Key Terms / Key Concepts Jamestown: The first permanent English settlement in the Americas, established by the Virginia Company of London as "James Fort" on May 4, 1607, and considered permanent after a brief abandonment in 1610. It followed several earlier failed attempts, including the Lost Colony of Roanoke. Roanoke: Also known as the Lost Colony; a late 16th-century attempt by Queen Elizabeth I to establish a permanent English settlement in the Americas. The colony was founded by Sir Walter Raleigh. The colonists disappeared during the Anglo-Spanish War, three years after the last shipment of supplies from England. English The English model of colonization brought key elements of the Spanish, French, and Dutch colonies together in one approach. The lateness of English colonization meant that the English were heavily influenced by the Spanish and provided a foil to Spanish colonization. One of the critical features of the English colonization model was the lack of cohesion between the colonies. By not following a uniform model of colonization, the English created great difficulties that would fuel future rebellions between England and its colonial worlds. The English seemed to be the most interested in gaining both territory and money. 
The English approach to the North American colonies centered on profit-seeking capitalism and religious freedom. English colonization was also divided between the New World and the Indian subcontinent, where English involvement proved central to the ultimate division of the Mughal Empire and the eventual British East India Company Raj. During the first wave of colonization, the English were the last European country to begin colonizing. Partly because they lacked the resources and technology that other Europeans had, the English had a very difficult time establishing a colonial presence. The Treaty of Tordesillas was another problem the English colonists had to overcome. The treaty divided the world between the Spanish and the Portuguese but left out the other European colonizers; because it carried the word of the Pope, the English were not initially willing to disobey the Christian Church to gain colonies. Early English explorers were divided in their approaches to colonization. Some focused their attention on the northern reaches of the world, attempting to find the mythical Northwest Passage. Among them was John Cabot, who explored the lands of Nova Scotia, Newfoundland, and Labrador near Canada. Cabot sailed in the late 15th century for the English king Henry VII. The English established a colony on these islands, but it was never successful, partially due to the political and economic turmoil in England during the Tudor Dynasty. The lack of resources from the Canadian coastline also made it difficult to ensure deeper connections to English colonization and community. Other English colonizers and settlers followed a different path, focusing instead on finding ways to prey upon and take from the Spanish. 
Many of these colonizers were interested in attacking the Spanish and causing disruptions to Spanish supply lines, small but significant wounds for the Spanish to overcome. English sailors such as Sir Francis Drake helped to wreak havoc on Spanish supplies in Latin America. Sir Francis Drake was born in 1540 CE in England and grew up during the Elizabethan Era. Drake spent his early life around the sea, traveling for the English as a merchant and trader around the northern sea ports of Europe. Recent historians note that some of Drake’s economic success rested upon slave trading in his early 20s. By venturing into African waters, Drake was provoking the Portuguese, who had a massive hold on the slave trade at the time. Drake’s antagonism of the Portuguese early in his career would be the bedrock of his political and economic fortunes. From there, Drake began to attack Spanish ships and their cargo. By raiding several Spanish ships, he began to amass a fortune in silver and gold leaving the New World. Drake became very well known in the Spanish and English worlds for different reasons: the Spanish grew very upset by the constant raiding and destruction, while the English queen Elizabeth began to favor Drake and gave him a seat at the English court. This type of harassment of the Spanish was important for the English because, first, it provided much-needed funds that helped the English continue to grow and expand their colonial operations, and second, it yielded key navigational and shipbuilding techniques for becoming better sailors. Sir Francis Drake’s circumnavigation of the world proved very profitable: not only did he gain massive amounts of Peruvian gold and silver, but the voyage demonstrated that the English were on their way to becoming a global empire. Upon his return to England, in 1581 CE, Drake was knighted by Elizabeth and his fortunes continued to grow. 
Queen Elizabeth relied on Drake not only to provide silver and gold to the English empire, but also to help fuel English colonization. Because of her antagonistic relationship with the Spanish king Philip II, Elizabeth publicly disavowed Drake but secretly pushed him to continue his harassment of the Spanish. Ultimately this harassment led the Spanish to build an armada to attack England and stop the raids. Drake knew that the Spanish were building massive warships, and his expertise from years of harassing the Spanish proved effective. Drake helped to design the English strategy of smaller ships that were lighter and easier to maneuver in the water against the larger and bulkier Spanish ships. Drake’s strategy proved successful, for when the Spanish arrived and attempted to invade England, the English ships were able to repel them and keep the Spanish from landing. The English defeat of the Spanish Armada in 1588 CE was the turning point for English naval policy; the English became the rulers of the sea with their superior ships and weapons. Drake was one of the key members of a group known as the Buccaneers, English pirates in the New World. The Buccaneers created significant and critical shortages in the supply of Spanish silver and gold from the New World. This was a massive problem for the Spanish, one that would eventually contribute to their downfall as a colonial world power. Sir Francis Drake paved the way for other English explorers and settlers; as English naval understanding grew, so too did the desire for growing colonies in the New World. One of the critical differences between the English and the Portuguese and Spanish was the English use of joint-stock companies. In today’s world, the voyage to the New World would be equivalent to going to Mars: it is extremely dangerous, expensive, and hard to reach. 
If you were interested in going to Mars, think about the funds you currently have; the cost would probably be much more than one individual could fund. But if you talked to your friends and their families and demonstrated how they would benefit when you and your company made it to Mars, you might get funds. You would issue a receipt demonstrating that the money each person gave you is proportional to the amount you need, and that they get a corresponding part of the profit. The very high risk is worth a very high reward. This arrangement would today be called a stock, and it is how many companies in the United States are financed, through the buying and selling of stocks. In the 15th to 17th centuries, it was incredibly expensive to travel to the New World. By offering stock options, companies took the enormous individual risk of the venture and spread it throughout the stockholders, lowering the risk to each. The use of the joint-stock company not only improved the risk/profit balance of a voyage but also created a group of investors who became increasingly wealthy as the rewards of successful ventures were spread among them. Over time, the English dropped the word "joint," and these companies became simply stock companies. The Dutch had a similar investment-based type of colonization with the Dutch East Indies and West Indies Companies, which provided travel funds and lobbied for colonization and the opening of markets. The British companies, on the other hand, were central to establishing colonization in the North American world. English exploration flourished following the defeat of the Spanish Armada in 1588 CE. New expeditions led by Sir Walter Raleigh to the New World would eventually pave the way for the first successful English colony. The changing North American political map was very important for the English in finding a region where they could establish a colony. 
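The joint-stock arithmetic described earlier, in which each investor's reward is proportional to the stake they put in, can be sketched in a few lines. All investor names and amounts below are invented for illustration; the point is only how a fixed cost and an uncertain profit are spread proportionally across shareholders.

```python
# Hypothetical sketch of joint-stock profit sharing: each investor's payout
# is proportional to the share of the venture's cost that they funded.
# Names and amounts are invented for illustration only.

def payouts(investments, total_profit):
    """Return each investor's share of the profit, proportional to stake."""
    total_invested = sum(investments.values())
    return {name: total_profit * amount / total_invested
            for name, amount in investments.items()}

# A voyage costing 1,000 pounds, funded by three investors:
stakes = {"merchant_a": 500, "merchant_b": 300, "merchant_c": 200}

# If the voyage returns 4,000 pounds of profit, each receives a share
# proportional to what they risked:
shares = payouts(stakes, 4000)
# merchant_a: 2000.0, merchant_b: 1200.0, merchant_c: 800.0
```

No single merchant had to risk the full 1,000 pounds, yet each still captures the outsized reward of a successful voyage, which is the risk-spreading logic that made companies like the VOC and the Virginia Company viable.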
The Spanish dominated South America and Florida, while the French had gained a colonial presence in the extreme northern region of North America. The English had to find a region between these two European powers to establish their own settlements. The English first attempt was in North Carolina’s Outer Banks region at the colony of Roanoke in 1585 CE. The colony failed for many reasons, including insufficient English support and the failure of political leadership to supply the colony with much-needed resources and maintenance. The second attempt at colonization was at Jamestown, in 1607 CE. It is the Jamestown colony that demonstrates how fragmented the early English vision of colonization was. Many future English colonists read and discussed the tales and writings of the Spanish conquest. These future colonists thought that there were many empires like the Aztec and Inca in the Americas, and that if the English could establish a colony, they could put themselves at the top of an indigenous empire. The charter of the Jamestown colony puts forward that the goals of the colony were to, “give and take Order, to dig, mine, and search for all Manner of Mines of Gold, Silver, and Copper, as well within any Part of their said several Colonies…” Because one of the main reasons for establishing the Jamestown colony was to gain as many mineral resources, such as gold and silver, as possible, the majority of the population consisted of middle- and upper-class males interested in getting wealthy and powerful in the Americas. English Colonial America: Differences of Plymouth and Jamestown This mindset was one of the central problems for the future colonists of Jamestown: because their interest lay in extracting wealth, Jamestown was not established for long-term growth, with families or individuals who had practical farming skills. Hence, in the year following the establishment of Jamestown, there was a prolonged period of starvation. 
The Jamestown colony was on the verge of failing until 1611 CE. Learning Objectives - Analyze the impact of Jamestown on the English system. - Evaluate the government and economics of Jamestown. Key Terms / Key Concepts Plymouth: An English colonial venture in North America from 1620 to 1691, first surveyed and named by Captain John Smith. The settlement served as the capital of the colony and at its height occupied most of the southeastern portion of the modern state of Massachusetts. Navigation Acts: A series of English laws that restricted the use of foreign ships for trade between every country except England. They were first enacted in 1651 and were repealed nearly 200 years later in 1849. They reflected the policy of mercantilism, which sought to keep all the benefits of trade inside the empire and minimize the loss of gold and silver to foreigners. Tobacco revitalized the Jamestown colony by introducing a large cash crop that could easily be produced in the region and provided great wealth to growers. Tobacco was one of the key crops of the Columbian Exchange. Once the settlers arrived in the Virginia area, they found there were indigenous peoples, but many of the English remarked that the American landscape was very empty and devoid of life. This was probably because of the diseases introduced during the Spanish colonial period, which had decimated the indigenous population. The other component is that the English were expecting to find large empires like those the Spanish had encountered, and none of these remained on the North American continent at the time of English arrival. The groups the English did find were local bands of Powhatans, part of the Algonquin indigenous groups. The Powhatans were friendly to the English and showed these settlers how to farm and grow local foods. 
The English, who were more interested in gold and expansion, thought that the local Powhatans would form the basis of their new English empire and wanted the indigenous populations to do the work of growing food. As a result, the Powhatans quickly left the English after demonstrating how to grow their own food. It is important to note that the majority of the first English settlers were males, similar to the Spanish colonization model. The biggest difference between the Spanish and English colonial societies’ relationships with indigenous populations is that the English were not interested in starting families with the indigenous populations. There was a very distinct separation between the English and the indigenous populations. The English, however, were interested in expansion, which meant that the Powhatans had to defend their homes and ways of life if they were to survive against the English settlement. The English found that their chances of finding large empires of gold and silver were extremely limited; thus, they began to run out of resources, and the scarcity of winter set in during 1609-1610 CE. That winter saw limited food, and individuals went as far as cannibalism, digging up the bodies of the recently deceased for food. Of the nearly 500 people in Jamestown, only 61 were alive in the spring of 1610 CE. To say that the Jamestown colony struggled in its first few years seems like an understatement. The introduction of tobacco was a major economic success and turned the fortunes of the English colony of Jamestown around. Tobacco cultivation was introduced in Virginia in 1611 CE by John Rolfe. The English had loved tobacco since it was introduced to Europe by the Spanish almost a century before. There were many smokehouses throughout London, and tobacco was even seen as a nuisance by King James I, who wrote about the harmful effects of the drug. 
Having tobacco grown in Virginia meant that the English colonies could make massive profits and keep the money inside the English economic system. The effect of tobacco’s successful cultivation in Virginia was so dramatic that much of the English focus turned to finding ways to build up the economy of the Virginia colonies. There is a great irony that the saving grace of the Jamestown colony, after the Starving Time, was not a food crop that could easily be consumed by the grower. The trade of goods for tobacco was the key way that the colonists could purchase their goods from the English company store. This relationship also had a dramatic effect on the labor of the Jamestown colony. Tobacco is a heavily labor-intensive crop; from planting to curing, much time and effort goes into making tobacco. The land has to be prepared by the farmer by clearing it, and then the seed has to be sown into the ground. Afterwards, the crop has to be tended, which takes upwards of four months to go from seed to crop, usually requiring constant watering and removal of insects. The harvest usually happens in the late summer, usually July to August, when the large leaves need to be “cured,” meaning that they are stored and dried out. From there, the crop is shipped and chopped into finer parts before it can be made into cigars, which was the popular way of consuming tobacco in the 16th century. The English demand for the addictive plant meant that there was a large amount of money to be made in Virginia. The problem was that the intensity of the labor was usually much more than a small farmer could manage. This meant that the English developed a unique labor system called indentured servitude, under which an American farmer could go to England and offer a contract for work to be completed over a set time frame. 
Many of the indentured servant contracts were for seven years for men and usually three years for women. English law was written so that the indentured servant system appeared to benefit both the servant and the farmer in many ways. The farmer got a worker for a set time and could make any demands of the worker that the farmer wanted. The farmer also received land, usually near their original home area, for bringing an English person to the Americas. The servant was also rewarded under English law. Many of the servants were lower class and could not afford to travel to seek fortunes in the Americas; servitude offered a way to get to the Americas as a trade-off. For the men, once their contract was completed, they were promised lands as well, so that they too could become farmers in Virginia. While this appears to benefit both sides, there were significant problems with the indentured servant system. First, the average lifespan of an indentured servant was approximately three years in colonial Virginia, while many of the male contracts were for seven. This was due to harsh working environments, the demands of the master, and diseases such as malaria that were rampant in the colonial Americas. Historians often note that this was very similar to a form of slavery. Female indentured servants were usually domestic workers and had laws that protected them from forced marriages to their masters during the indentured servant contract. The lack of women in Virginia meant that women’s labor was at a premium, and many times the contracts were shortened because of the valuable domestic services that women provided. The downside was that women were not given the same opportunities after their contract expired, and many ended their contracts with marriage to the master without being granted their own lands. 
Also, the lands that were given to newly freed male indentured servants were usually in the western territories of the Virginia colony, lands that were often in dispute with indigenous populations because the English would simply grant them, without treaty and without consulting the indigenous populations. The harshness of indentured servant life resulted in many running away from their masters. Surviving primary sources demonstrate how difficult it would have been to identify indentured servants who ran away. The system of indentured servitude was very risky and fraught with problems, from harsh working conditions, to contract terms that meant the workers often never benefitted from their contract, to those who earned their freedom being unable to receive lands that were not in dispute with indigenous populations. The breaking point of the indentured servant system was Bacon’s Rebellion in 1675-1676 CE. Nathaniel Bacon was a planter who took up the cause of indentured servants who had worked to gain their freedom and lands but, because of political problems the Virginia governor faced, could not successfully obtain them. In the 1670s, the Virginia governor William Berkeley saw increasing problems with the indigenous population. Berkeley’s decision was to make peace treaties with the indigenous populations on the western borders of Virginia, which stated that the English would travel to or own no more lands west of a line of demarcation. The indigenous populations were satisfied, and this stopped many raids and much fighting between the indigenous peoples and the English settlers. But Berkeley had a secondary problem: many of those lands were where indentured servants had been promised holdings, and a growing population of newly freed indentured servants felt that the promise of their contract was not being fulfilled. Nathaniel Bacon became a leader of this growing group of discontented newly emancipated indentured servants. He led a small force against the governor of Virginia, demanding their contract lands. 
Bacon was successful in torching the Jamestown settlement and chasing Berkeley from Virginia. In the resulting chaos, Bacon led his men in anger against the indigenous populations and raided and murdered several different groups in the Virginia region. Bacon was able to capture Berkeley and drew a gun pointed at Berkeley’s chest demanding changes, but Berkeley would not budge on his orders. Knowing that his demands would not be met, Bacon held Jamestown for months. It took almost a year before the rebellion broke apart, mostly due to Bacon dying of dysentery. The result of Bacon’s Rebellion was clear to the colonial administration: something had to be done to clearly distinguish indentured servitude, freedom, and landownership. Bacon’s Rebellion was the key turning point because indentured servitude was no longer favored as a key method of labor in the English colonial system. Instead, the English began relying on the system of African slavery that had started in 1619 CE. Slavery in the English system started early but changed as a direct result of Bacon’s Rebellion. The first enslaved Africans arrived in English North America in Virginia in 1619 CE. At first, colonial society was not clear about what this meant for African populations. Africans were originally brought as indentured servants. It is important to note that this was essentially slavery, and that the treatment of the African population brought to the early Virginia colony was very harsh. By the 1630s CE, several African individuals had earned their freedom and lands. Following Bacon’s Rebellion, African populations in the colonies were consigned to enslavement. The form of slavery that the English developed became known as chattel slavery, in which those who were enslaved, and all their descendants, were enslaved for all future times. 
This system was very brutal because it meant that if an individual was born into slavery, their family and descendants would remain enslaved in perpetuity. By enslaving African populations, it became very clear to the English colonists who was free and who was enslaved. This type of slavery continued from the middle of the 17th to the middle of the 19th century and would underlie many of the social and political problems of the American colonies, from unification through the American Civil War. The role of African populations in English society was also unique; being at the bottom tier meant that they were treated terribly by all in the English system. African American women were subject to abuses by both male and female white owners. Labor went beyond simply producing crops and extended into family support work, such as domestic labor. The English system of race was heavily influenced by historic relationships in England and would have a significant influence on future colonization. The English had a very different historic relationship with race than other European colonizers. For example, the invasion of Spain in 711 CE by the Berbers from Northern Africa had a profound impact on the Spanish integration of diverse populations into their society. The English, on the other hand, were invaded only by other Europeans throughout their history. This had a profound impact on the English understanding of race and ethnicity. This lack of experience with racial diversity meant that, as the English expanded throughout the world in the Early Modern period, they had a very difficult time integrating and fairly treating others, such as African and indigenous populations, within English society. For example, the English did not integrate the indigenous into their colonial society in Jamestown. The indigenous populations were pushed to the outside of the English system. The English would also take lands and break treaties with the indigenous populations. 
The mistreatment of the indigenous population would only intensify moving forward as the English traveled throughout the world and continued this lack of integration of populations. The treatment of the Afro-English populations in the 17th century likewise demonstrated this pattern of exclusion. The relationship of power between the English and other populations became an either/or situation: either the individual was English, or they had no political or economic power. The English would carry these ideas far beyond the North American shores, into the Indian subcontinent as well during subsequent colonization. In 1672, the Royal African Company was inaugurated, receiving from King Charles a monopoly of the trade to supply slaves to the British colonies of the Caribbean. From the outset, slavery was the basis of the British Empire in the West Indies and later in North America. Until the abolition of the slave trade in 1807, Britain was responsible for the transportation of 3.5 million African slaves to the Americas, a third of all slaves transported across the Atlantic. The introduction of the Navigation Acts led to war with the Dutch Republic. In the early stages of this First Anglo-Dutch War (1652-1654), the superiority of the large, heavily armed English ships was offset by superior Dutch tactical organization. English tactical improvements resulted in a series of crushing victories in 1653, bringing peace on favorable terms. This was the first war fought largely, on the English side, by purpose-built, state-owned warships. After the English monarchy was restored in 1660, Charles II re-established the navy, but from this point on, it ceased to be the personal possession of the reigning monarch and instead became a national institution, with the title of “The Royal Navy.” As the English were developing their North American southern colony, they began a second colonial project in the American north. 
The English development of the colony of Plymouth took lessons from the first English colonization. To understand the origins of the Plymouth colony, it is important to start with the Protestant Reformation and the political and cultural transformations in the English system. When Henry VIII created the Anglican Church, he started the deep divisions in the English Christian community. These would intensify with subsequent English rulers and the English Civil War. The reign of Charles I intensified the desire for reformation of the Anglican Church toward greater purity. These reformers would become known as the Puritans, wanting to purify the Anglican Church of Catholic elements. Many of these individuals wanted full separation from the Anglican Church. Many of the Puritans were middle to upper class and had wealth. They were strict adherents to Calvinist thought, emphasizing reading and writing for the individual as well as putting the importance of family ahead of social belonging. Because of the political turmoil in England, many of these individuals left England for the Netherlands, drawn by its similar Protestantism and freedom of movement. The English who went to the Netherlands were there for approximately ten years before they found the culture too unfamiliar and yearned for their children to be raised in English customs and culture. The Puritans gained a company charter in 1619 CE to establish the Plymouth colony. The congregation could apply for a company charter for the New World to establish an area that they could control and where they could practice their own faith. The English crown granted them a charter to land near Virginia and allowed the Puritans to leave in June 1620 CE. The planning and settlement of Plymouth help to demonstrate the key differences between the Puritans and Jamestown. The first difference was that the members of the voyage on the Mayflower were middle class and traveled as complete families. 
Having women and families was a key difference between the Jamestown and Plymouth colonies. The stability of family meant that Plymouth’s population did not suffer the way that Jamestown’s did. Families also provided structure to Plymouth society. The second difference between the English colonizations was that the Plymouth colony centered around the Puritan Church. In the Jamestown colony, the Christian church was not the center of society, whereas in the Plymouth colony the Puritan church was the key to society. The role of the Puritan Church went well beyond the cultural; it extended to the political. Another key difference between the English colonization and the Spanish, French, Portuguese, and Dutch was the role of government in colonial society. In the Spanish and Portuguese colonies, the colonial governments were established by the crown and many of the decrees were issued by the monarchy. The French crown also held great control over the colonial world, but allowed a bit more leniency between the French individual and the crown. The Dutch also allowed for more local control, and the company charter held the majority of the political and economic authority in the colony. The English, on the other hand, allowed their North American colonies to establish their own governments. The Jamestown colony established the House of Burgesses, in which men of good standing (white, property owning, and over 21) could elect their representatives at the colonial level. This would prove to be a very important relationship because the Jamestown colony’s establishment meant that the colonies could raise their own taxes and establish their own smaller rules for political arrangements. The Puritans also established their own rules of government, beginning with the Mayflower Compact. This was a charter in which every male on the Mayflower agreed that they could participate by either direct elections or holding a direct vote in the colony. 
The difference between the House of Burgesses and the Mayflower Compact was that Puritan society stipulated that participation in the colonial government was predicated on participation in the Puritan church. This highlights how important the role of the Christian church was in distinguishing these two societies. Under both the House of Burgesses and the Mayflower Compact, these societies could levy taxes for their respective colonial governments. Yet the British crown did not ask the colonies to pay into the larger British tax system; that is, if you lived in the colonial societies, you did not pay British taxes. It is unclear why the British would allow such an important oversight of the colonial world, but it would prove to be very significant when considering the divisions between the colonies and the English over taxation. Self-governing representative institutions were essential to the British system of colonization and provided a key difference from the other colonial models. The Plymouth colony eventually became known as Plymouth Plantation and was relatively successful early on in its social organization. The Plymouth colony had a short period of starvation that was much less dramatic than Jamestown’s. The Plymouth colony could not plant or produce tobacco; thus, the need for indentured servants and slaves was much smaller. Most of the farming in the Plymouth colony was subsistence farming or the export of key materials such as timber. The lighter labor needs meant that the Plymouth colony relied on trade far more heavily than the Jamestown colonial world and would develop in a very different direction. Over time, the Puritans split because of religious and cultural differences, and individuals broke away from the Plymouth colony to establish their own colonial worlds. 
Anne Hutchinson and Thomas Hooker would eventually leave the Puritan settlements of Massachusetts and found their own colonies nearby in Rhode Island and Connecticut. The model of the relationship between church and state remained throughout these newly formed colonial worlds. The English developed different colonial models in the Jamestown and Plymouth colonies. The role of church and state, the economics, the culture, even the gender relationships were, early on, defined by and held to the core ideas of each of these respective regions. While both of these colonial models were important, it is easiest to imagine them as the two poles of the North American colonial model: on one side is Jamestown and on the other is Plymouth, and the subsequent colonies that emerged blended these ideas into shades of grey between them. For example, the American south developed on the Jamestown model, but the colonies from North Carolina to Maryland differed on key issues. The same can be said for the New England colonies as well. The region between these two poles was most notably the Middle Colonies of Pennsylvania, New York, New Jersey, and Delaware. These colonies took elements of both sets of colonial models as inspiration. Pennsylvania is an excellent example of such a blend. William Penn established the colony for religious freedom for Quakers, a Protestant group who believed that the chosen were touched by God and would shake or quake. This was a peaceful Protestant group that believed there was a role for men, women, and indigenous peoples in the community. Pennsylvania had excellent farmland and relied somewhat on indentured servant labor, but not many slaves entered the Pennsylvanian world. This blend of elements from Jamestown and Plymouth defined the Middle Colonies and would be shared throughout the North American world. 
The North American colonial experimentation demonstrates that the English system was very difficult and had unique qualities of colonization. The English reliance on harassing the Spanish while establishing their own imperial base was of significant importance for the English. The establishment of a colonial society that had self-representation in government also provided a unique challenge and standard for the English. While the English system in the North American world had its unique similarities and differences, the British colonization of the Indian subcontinent held some of these same idiosyncrasies. The English were also interested in exploring South Asia, specifically the Indian subcontinent region. The British followed many of the Dutch and French settlements in the region to establish settlements, but were ultimately successful by incorporating a model of company control, similar to Jamestown. This would prove beneficial in the colonization and conquest of India, when the British East India Company came to control the region. This will be explored later, but note that many of the same cultural, political, economic, and social systems established by the English here in the early days of exploration would continue forward with the English colonization of South Asia as well. Primary Source: Bacon’s Rebellion: The Declaration Nathaniel Bacon (1676) Economic and social power became concentrated in late seventeenth-century Virginia, leaving laborers and servants with restricted economic independence. Governor William Berkeley feared rebellion: “six parts of Seven at least are Poore, Indebted, Discontented and Armed.” Planter Nathaniel Bacon focused inland colonists’ anger at local Indians, who they felt were holding back settlement, and at a distant government unwilling to aid them. 
In the summer and fall of 1676, Bacon and his supporters rose up and plundered the elite’s estates and slaughtered nearby Indians. Bacon’s Declaration challenged the economic and political privileges of the governor’s circle of favorites, while announcing the principle of the consent of the people. Bacon’s death and the arrival of a British fleet quelled this rebellion, but Virginia’s planters long remembered the spectacle of white and black acting together to challenge authority. 1. For having, upon specious pretenses of public works, raised great unjust taxes upon the commonalty for the advancement of private favorites and other sinister ends, but no visible effects in any measure adequate; for not having, during this long time of his government, in any measure advanced this hopeful colony either by fortifications, towns, or trade. 2. For having abused and rendered contemptible the magistrates of justice by advancing to places of judicature scandalous and ignorant favorites. 3. For having wronged his Majesty’s prerogative and interest by assuming monopoly of the beaver trade and for having in it unjust gain betrayed and sold his Majesty’s country and the lives of his loyal subjects to the barbarous heathen. 4. For having protected, favored, and emboldened the Indians against his Majesty’s loyal subjects, never contriving, requiring, or appointing any due or proper means of satisfaction for their many invasions, robberies, and murders committed upon us. 5. 
For having, when the army of English was just upon the track of those Indians, who now in all places burn, spoil, murder and when we might with ease have destroyed them who then were in open hostility, for then having expressly countermanded and sent back our army by passing his word for the peaceable demeanor of the said Indians, who immediately prosecuted their evil intentions, committing horrid murders and robberies in all places, being protected by the said engagement and word past of him the said Sir William Berkeley, having ruined and laid desolate a great part of his Majesty’s country, and have now drawn themselves into such obscure and remote places and are by their success so emboldened and confirmed by their confederacy so strengthened that the cries of blood are in all places, and the terror and consternation of the people so great, are now become not only difficult but a very formidable enemy who might at first with ease have been destroyed. 6. And lately, when, upon the loud outcries of blood, the assembly had, with all care, raised and framed an army for the preventing of further mischief and safeguard of this his Majesty’s colony. 7. For having, with only the privacy of some few favorites without acquainting the people, only by the alteration of a figure, forged a commission, by we know not what hand, not only without but even against the consent of the people, for the raising and effecting civil war and destruction, which being happily and without bloodshed prevented; for having the second time attempted the same, thereby calling down our forces from the defense of the frontiers and most weakly exposed places. 8. For the prevention of civil mischief and ruin amongst ourselves while the barbarous enemy in all places did invade, murder, and spoil us, his Majesty’s most faithful subjects. 
Of this and the aforesaid articles we accuse Sir William Berkeley as guilty of each and every one of the same, and as one who has traitorously attempted, violated, and injured his Majesty’s interest here by a loss of a great part of this his colony and many of his faithful loyal subjects by him betrayed and in a barbarous and shameful manner exposed to the incursions and murder of the heathen. And we do further declare these the ensuing persons in this list to have been his wicked and pernicious councilors, confederates, aiders, and assisters against the commonalty in these our civil commotions. Sir Henry Chichley William Claiburne Junior Lieut. Coll. Christopher Wormeley Thomas Hawkins William Sherwood Phillip Ludwell John Page Clerke Robert Beverley John Cluffe Clerke Richard Lee John West Thomas Ballard Hubert Farrell William Cole Thomas Reade Richard Whitacre Matthew Kempe Nicholas Spencer Joseph Bridger John West, Hubert Farrell, Thomas Reade, Math. Kempe And we do further demand that the said Sir William Berkeley with all the persons in this list be forthwith delivered up or surrender themselves within four days after the notice hereof, or otherwise we declare as follows. That in whatsoever place, house, or ship, any of the said persons shall reside, be hid, or protected, we declare the owners, masters, or inhabitants of the said places to be confederates and traitors to the people and the estates of them is also of all the aforesaid persons to be confiscated. And this we, the commons of Virginia, do declare, desiring a firm union amongst ourselves that we may jointly and with one accord defend ourselves against the common enemy. And let not the faults of the guilty be the reproach of the innocent, or the faults or crimes of the oppressors divide and separate us who have suffered by their oppressions. 
These are, therefore, in his Majesty’s name, to command you forthwith to seize the persons above mentioned as traitors to the King and country and them to bring to Middle Plantation and there to secure them until further order, and, in case of opposition, if you want any further assistance you are forthwith to demand it in the name of the people in all the counties of Virginia. Nathaniel Bacon General by Consent of the people. William Sherwood Source: "Declaration of Nathaniel Bacon in the Name of the People of Virginia, July 30, 1676," Massachusetts Historical Society Collections, 4th ser., 1871, vol. 9: 184–87. Attributions Images courtesy of Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons/4/4f/Clive.jpg Boundless World History https://www.coursehero.com/study-guides/boundless-worldhistory/england-and-parliamentary-monarchy/ Work based around the ideas of Patricia Seed: Ceremonies of Possession in Europe's Conquest of the New World, 1492–1640
Oceania and the Expeditions of James Cook Overview Oceania and the Expeditions of James Cook Far south and east of China is a region dotted with thousands of islands, each as unique as the people who inhabit it: Polynesia, Micronesia, and Melanesia. Mixed into this region are also the larger islands of New Zealand and Australia. These regions together comprise Oceania: the South Pacific region of the Southern Hemisphere. Diverse in its peoples, unmatched in its linguistic diversity, and unique in plants and animals, Oceania is a vast oceanic world of volcanic islands, tropical paradises, extraordinary mountain ranges, desert, and untamed rainforests. For centuries, this region’s isolation prevented it from mass exploration. And yet stories and legends of this almost mystical region spread. Asiatic and European explorers turned their attention to the South Pacific in the seventeenth and eighteenth centuries, unaware of the people and cultures that had conquered continents and claimed islands centuries before. Learning Objectives - Identify the key people, clans, and events to occur in Oceania during the eighteenth century. - Evaluate the expeditions of James Cook. Key Terms / Key Concepts Oceania: name for the part of the South Pacific Ocean in which Australia, New Zealand, Micronesia, Polynesia, and Melanesia lie James Cook: eighteenth-century English explorer and navigator who charted much of the South Pacific Māori: Clans of diverse peoples who inhabit New Zealand Omai: Polynesian man who served as Cook’s right hand in matters related to translation and navigation during his second and third voyages King Kalaniʻōpuʻu: Hawaiian king who initially befriended James Cook James Cook According to legend, it was as a teenager that James Cook first fell in love with the sea. Apprenticed as a sailor in a small merchant fleet, Cook excelled in both the practical and intellectual skills of seamanship. 
Several years later, he traveled to North America with the British naval fleet in the seminal war against the French: the Seven Years’ War. It was during this time that he turned his attention to cartography and charted the Newfoundland coast. Following the British victory in the Seven Years’ War, Cook’s skills as a marine surveyor and exceptional sailor caught the interest of Britain’s Royal Society. Unlike some of Britain’s more illustrious seamen, Cook lacked bravado but possessed intellect. A combat veteran with a keen eye for cartography, as well as interests in natural history and botany, James Cook was the ideal man to lead Britain’s expeditions to discover the Great South Land. Cook’s First Expedition The Royal Society promoted discovery and knowledge for Britain’s sake. Rumors circulated about early expeditions to the Southern Seas of the Pacific, far in the Southern Hemisphere. In 1768, James Cook set out on his first of three voyages. Ideally, he would discover new islands and water routes for Britain’s expanding empire. Setting sail from Britain in 1768, Cook and his crew reached the Pacific Island of Tahiti in 1769. There, in June 1769, Cook observed the transit of Venus as the planet crossed the face of the sun. After resupplying, Cook set forth again in pursuit of the “Great South Land” rumored to be deep in the South Pacific, charting islands and coastlines as he sailed. In October 1769, Cook and his crew caught their first sight of New Zealand. Over the following months, they charted and sailed around the islands of New Zealand. Portrait of James Cook. The scene with mountains and ships behind Cook’s portrait is suggestive of his landing in New Zealand. Cook and the Māori In October of 1769, Cook’s shipboy, Nicholas Young, spotted land. Briefly, the crew moored. All too soon they encountered the fierce Māori clans. Descended from Polynesian voyagers, the Māori proved far fiercer than Australia’s Aborigines. 
Tall and tattooed, the Māori were warriors who were displeased with the European encounter. Their violent reception forced Cook to quickly set sail once more. In his fabled ship the Endeavour, Cook sailed to the far northeastern side of New Zealand’s North Island. Landing at Mercury Bay, Cook and his crew received a far different reception from the local Māori. For the first time, the Europeans engaged in trade with the Māori. Peaceful relations ensued, carefully crafted by both the Māori and Cook’s crew. Perhaps the reason for the shift in receptions can be partially attributed to Cook’s right-hand man, Tupaia. A Polynesian man who served as a navigator and a priest, Tupaia was also Cook’s translator. When the Endeavour landed at Mercury Bay, Tupaia went ashore and worked to bridge the language barrier between the Māori and Cook. When the ship departed, Tupaia shared his knowledge of the winds and currents with Cook, who, in turn, held unusual respect for native learning and knowledge. The routes charted by Cook were used extensively by sailors until the 1900s. Cook and the Aborigines Following his stay in New Zealand, Cook sailed up the east coast of Australia, as the first known European to do so. Along his journey, Cook landed at Botany Bay and Port Jackson. Reports of his missions inland documented Europeans’ first encounter with kangaroos. Moreover, his crew were among the first Europeans to encounter Australia’s Aborigines. James Cook’s impression of the Aborigines is as unique as it is insightful. Cook recorded that the Aborigines seemed “far happier” than Europeans. They were not materialistic but rather connected to everything they needed through the earth and sea. The description provided by Cook showcases him as a more humane man than his predecessors, successors, and contemporaries, who demonstrated little respect, if not outright disregard, for the Aborigines. But he was also a severe disciplinarian. He did not hesitate to flog his sailors. 
And tragically, the sympathetic view he expressed toward the Aborigines would not prevail among white Europeans. Cook’s Second Expedition The success of Cook’s first voyage prompted the Royal Society to support him on a second voyage in 1772. This time, he sailed south in search of present-day Antarctica. Rumors flew of a great land covered in ice deep in the South, and Cook wanted to find it. His trusted compatriot, Tupaia, had died during the first voyage. This time, Cook employed a dazzling commoner from Polynesia named Omai. Although less knowledgeable than his predecessor, Omai charmed sailors and British citizens alike. Serving as a translator and assistant navigator, he helped Cook chart routes through the South Pacific. After a trying voyage, the crew approached Antarctica, but they fell just short of laying eyes on the great continent. However, Cook’s second voyage brought back significant knowledge of South Pacific islands such as Vanuatu, Tahiti, Tonga, Tasmania, and the Cook Islands. From an ecological and economic standpoint, Cook’s second voyage is also important because of his experiences in New Zealand. His crew rested and resupplied in the South Island’s fiords for seven weeks. It was during this period of rest that Cook introduced his crew to “spruce beer,” a beverage packed with vitamin C. Cook quickly administered the drink to his crew to repel the dreaded illness of scurvy, which resulted in muscle weakness and abnormal bleeding. During their stay, Cook and Omai helped win the trust of the Māori: Omai, for his ability to interpret, and Cook, for his gifts of seeds, fowl, and pigs to the Māori. A cultural exchange and partnership began. Portrait of Omai. Cook’s Third Expedition In 1776, Cook set off for a final voyage. This time, the Royal Society commissioned him to find the rumored “Northwest Passage”: an oceanic link between the Atlantic and Pacific Oceans thought to lie somewhere north of North America. 
Cook set forth with his trusted companion, Omai, on his third voyage with the goal of securing the route for Britain. No such Northwest Passage existed, but that did not prevent Cook from exploring and charting much of the Alaskan and western Canadian coasts. Exhausted by the efforts, Cook decided to take his crew and ship south for the winter. In 1778, Cook fatefully landed in Hawaii. Cook in the Hawaiian Islands He moored his ship off the coast of Hawaii’s main island. To the delight and astonishment of James Cook, the Hawaiian islanders welcomed him and his crew reverentially. What he did not know initially was that their landing coincided with the Hawaiian festival of Makahiki. The festival centered around spiritual activities and games, including the arrival of the all-important deity Lono. Cook, who was approaching fifty years old, was welcomed with open arms to the island and decorated in ceremonial dress. Reports flew that he and the Hawaiian king, Kalaniʻōpuʻu, had formed a close friendship. For a month, the islanders supplied him and his crew with food and other provisions. But not all islanders welcomed the prolonged European presence. Many felt that the Europeans took advantage of their hospitality and resources. When Cook finally departed on February 4, 1779, his ship was laden with supplies. A week later, storms forced Cook’s ships to return to the Hawaiian Islands for repairs. Confused by the reappearance of this man and his crew, the Hawaiians received Cook far less warmly than previously. They viewed their visitors with a suspicion that intensified when the hungry Europeans sought supplies from the Hawaiians again, and hostility between the Hawaiians and the Europeans increased over the successive days. At one point, a small boat was stolen from Cook’s fleet. To negotiate for its return, Cook fatefully attempted to take King Kalaniʻōpuʻu as a hostage on one of his ships. The Death of James Cook. 
Cook’s attempt at kidnapping their king demonstrated to the Hawaiians that he was their enemy. As James Cook walked through the surf with the Hawaiian king towards his ship, a large group of Hawaiians approached him. One struck him violently on the back of the head, knocking him into the surf. Almost immediately after, Cook was stabbed to death. His body was carried off by the Hawaiians, who later disemboweled, baked, and divided it. Out of respect for him, they returned a few of his remains to the crew for sea burial. But of the crew who set out with Cook in 1776, only a handful would complete his third voyage and return to Britain, only to report that no Northwest Passage existed. Cook’s Legacies James Cook’s legacy remains complex and contested. He was, undoubtedly, among the greatest marine surveyors and navigators in history. Equally undisputed is that he demonstrated greater compassion for the local peoples of Polynesia, Australia, and New Zealand than his successors would. His early interactions with the Māori of New Zealand, in particular, began a long tradition of collaboration. But his violent death in the Hawaiian Islands also stripped away European ideas of Oceania as a second Eden. In place of the romantic ideas emerged a collective European impression of Oceania as a place of grim human existence riddled with violence. That legacy would carry forth into the twentieth century with disastrous consequences for Oceania. Cook’s explorations brought together the European and Oceanic worlds. British citizens remained fascinated by Cook’s voyages and by the Polynesian navigator Omai. However, Cook’s brutal death in Hawaii caused European attitudes to turn critical of Oceania and its peoples. No longer did that region of the world seem like a magical, timeless Eden full of unusual plants and animals and charmingly quaint peoples. Instead, it appeared dark and ominous, and in need of conquering. Attributions Images from Wikimedia Commons Matsuda, Matt K. 
Pacific Worlds: A History of Seas, Peoples, and Cultures. Cambridge: Cambridge University Press, 2012. 136-41. Insight Guides: New Zealand. Ed. Tom Le Bas. Long Island City, NY: Langenscheidt Publishers, Inc., 2009. 30-32. Welsh, Frank. Australia: A New History of the Great Southern Land. New York: The Overlook Press, 2004. 22.
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87890/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87891/overview
Impact of European Settlement in Australia, New Zealand, and the Pacific Overview Impact of European Settlement in Australia, New Zealand, and the Pacific James Cook’s expeditions to Oceania brought Europeans into contact with the Māori and Aborigine peoples of New Zealand and Australia. His voyages also brought knowledge of the far side of the world back to England and Western Europe. From the late eighteenth century forward, Western Europeans sought ways to settle, develop, and exploit the resources, countries, and islands in Oceania. Learning Objectives - Investigate the legacies of James Cook’s voyages into the South Pacific/Australia/New Zealand. - Evaluate European interactions with Māori and Aborigine groups. Key Terms / Key Concepts Pacific Exchange: exchange of goods, resources, and disease between Europe and the South Pacific Treaty of Waitangi: 1840 treaty that signed over much Māori land in New Zealand to the British New Zealand Wars: series of conflicts between English soldiers and the Māori peoples over land ownership in the nineteenth century Penal colony: a colony established by a parent country for the purpose of exiling prisoners Australian Gold Rush: a series of gold rushes in Australia in the 1850s James Cook’s Early Expeditions in Oceania James Cook’s explorations brought together the European and Oceanic worlds. While British citizens remained fascinated by Cook’s Polynesian navigator, Omai, and Cook himself was regarded as having unusual respect for the indigenous peoples he encountered, European attitudes quickly turned critical of Oceanic peoples and cultures. During the nineteenth century, a peculiar exchange system arose between the South Pacific territories and western Europe. 
Although less well-defined than its Atlantic counterpart, this exchange system can loosely be called the Pacific Exchange because food, technology, cultural features, and diseases were transported between Europeans and the Māori and Aborigine peoples of Australia and New Zealand. The British Presence in New Zealand Of the groups encountered by Cook, it is the Māori of New Zealand whose relationship with the growing British presence was the most mixed. For unlike Australia, and other less remote regions of the British Empire, New Zealand was largely left alone until the mid-1800s. The British came and went sporadically, not officially colonizing New Zealand until 1840. Until the Napoleonic Wars in the early 1800s, Britain largely ignored the New Zealand islands, which they had loosely claimed as an extension of “British Australia.” Only when the need for whale oil rose sharply did the British remember the islands far south in the Pacific. British sailors and whalers arrived in scores on New Zealand’s shores. Initially, some of the Māori worked with the British, not only joining their whaling crews but also serving as seal hunters. And oil was sent in vast quantities back to Britain. As the British connection to New Zealand grew, so too did the British desire to “civilize” the Māori according to western traditions. Unsurprisingly, missionaries arrived to preach the Gospel and convert the Māori. Under the Anglican minister Samuel Marsden, missionaries in New Zealand also taught the Māori their skills in carpentry, farming, and European technology. The Māori later used these skills, and their knowledge of the oceanic weather and current patterns, to develop a commercial exporting business. Meeting of white settlers and Māori peoples in 1863. Trouble between the British and the Māori Initial troubles between the Māori and Europeans arose over land claims. Anxious to formally establish themselves as the colonizers of New Zealand, the British created the Treaty of Waitangi. 
Although never formally ratified, it was signed by Captain William Hobson and, reportedly, dozens of Māori clan leaders. Chiefly, the treaty signed over much of New Zealand’s land to the British, recognizing them as colonizers. For a communal culture that possessed no concept of private property, the Treaty of Waitangi was confusing; the Māori who signed it hoped the British would respect their rights. Within five years, the first skirmishes between the Māori and British immigrants erupted as the British failed to respect Māori customs and land. In 1865, just as the American Civil War ended, a series of prolonged, often stalemated wars erupted between the Māori and British New Zealanders. Collectively called the New Zealand Wars, these conflicts were fought over land ownership. Exhausted by lack of food and resources, the Māori capitulated in 1872. A few years after the conclusion of the New Zealand Wars, the country emerged as one of the most progressive in the world. Public education was required for Māori and British children. Two years later, white and Māori men were given the right to vote. The rising sheep industry also saw Māori working with their British counterparts. And in 1907, the country’s British population had grown so large that New Zealand was given the title “Dominion of New Zealand” and considered a part of Britain’s ever-important, ever-growing empire. Unfortunately, New Zealand’s glorious rise was interrupted as war clouds gathered and the country approached the outbreak of World War I in 1914. The British in Australia Australia’s journey from the eighteenth to twentieth centuries stands in stark contrast to New Zealand’s relatively progressive rise. Several reasons exist for the difference. Climatologically, New Zealand offered British immigrants a much pleasanter environment than Australia. With green rolling hills, sharp mountain ranges, and cold sparkling ports, New Zealand was reminiscent of northern England and Scotland. 
Contrastingly, Australia was a massive continent riddled with highly venomous snakes, massive crocodiles, spiders, savage coastlines, and the unrelenting heat of the sunburned “outback.” Nothing about this baked continent felt familiar to British immigrants. Moreover, Australia was home to hundreds of thousands of Aborigines. These nomadic people initially intrigued, and later repulsed, white Australians. Unlike New Zealand’s Māori people, the Aborigines were not fierce warriors and were not interested in the white Europeans. They were, however, very territorial and not prone to sharing land with the newcomers. Australia as a Penal Colony The most significant difference, however, stems from Australia’s original purpose as a British colony. After losing their North American colonies during the American Revolutionary War, the British sought new colonies for their non-violent criminals, many of whom were in debtor’s prisons. Australia became the ideal location. Halfway around the world from Britain, Australia had a hostile environment, and in British eyes no one else had claimed the country. It provided the perfect new penal colony to which to send criminals and debtors during the late eighteenth and nineteenth centuries. The British answer to their criminal and debtor population proved disastrous from its onset. Many of the immigrants suffered from diseases contracted while living in tight, dirty quarters on the ships for months on end. Not infrequently, women arrived malnourished and pregnant, for they shared quarters with men aboard the ships, and frequently once they landed. The criminals who arrived were often city-folk with no understanding of farming in a temperate climate, much less one so foreign. Malnourishment and starvation prevailed. Those who survived bore witness to violence, theft, and general chaos. Conditions improved, especially for women, only after twenty-five years of struggle. 
In the early 1800s, special facilities were built for women who worked as indentured servants, carrying out the duties of sewing, caregiving, spinning, and small-scale agricultural work. Strife between white Australians and Aborigines As the sheep industry started to gain hold in Australia, conditions improved in many ways for white Australians. Ranchers learned, for the most part, to manage their flocks. What they did not learn to manage were their Aboriginal neighbors. Initial curiosity soon gave way to hostility as sheep left the enclosures and ranges of white Australians and migrated onto Aboriginal lands. Frustrated by the encroachment, Aborigines frequently caught (and ate) the sheep, or stole them. Anger arose, and the general attitude between the cultures remained frustrated and hostile. Often, the white Australians retaliated ten-fold when an Aborigine committed a crime against them. Benefitting from European muskets and, later, rifles, they frequently murdered an entire Aborigine family for the crime of an individual. A much more sinister foe than military technology arrived with the white Australians. Like the diseases which had accompanied Cortés and Pizarro in their conquests of the Americas, white Australians brought new diseases to Australia. Smallpox, venereal diseases, tuberculosis, cholera, and flu decimated the Aborigine populations. Tension between Aborigine and white Australian populations remains even in the twenty-first century. Prosperity comes to Australia Prosperity did come to some fortunate white Australians during the 1850s with the Australian Gold Rush. For most Australians, though, life was a cycle of isolation, small-time sheep farming, and severe weather. Then in 1901, the Australian Commonwealth was formed as part of the British Empire. It seemed that Australia had arrived, even if white Australians continued to resent and persecute their Aboriginal neighbors. 
Their export industry thrived even as their cities grew in splendor and sophistication, especially Melbourne. White Australians had arrived on the continent under arduous circumstances, many as convicts. By the turn of the twentieth century, though, their tenacity had transformed Australia’s coastal regions. For the most part, they enjoyed their isolation, and Britain largely ignored its former penal colony until the stirrings of the First World War arose. In that moment, Australia was not only remembered but proved its mettle as an important player in world affairs. Sydney, nineteenth century. Attributions Images courtesy of Wikimedia Commons. Matsuda, Matt K. Pacific Worlds. Cambridge University Press, 2012. 165-66. Welsh, Frank. Australia: a New History of the Great Southern Land. Overlook Press, 2006. 44. Insight Guides: New Zealand. Langenscheidt Publishers, Inc. Long Island City, NY. 2009. 34-42.
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87891/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/87892/overview
Early Modern Northeast Asia Overview Early Modern Northeast Asia During the early modern period, roughly 1400 to 1800, the traditional societies of northeast Asia faced a number of foreign and domestic challenges that were part of economic, political, social, and technological modernization around the world. These challenges included indigenous imperial competition among the Chinese, Manchurian, and Russian empires; interactions with European explorers, missionaries, and traders; and various proto-industrial technological and organizational advances. The societies of northeast Asia responded to these challenges in a variety of ways, including with numerous economic, political, diplomatic, and social changes that tested their adherence to their own traditions. Learning Objectives - Identify and assess the forms, effects, and repercussions of east Asian contacts with Europeans between 1200 and 1800. - Identify and analyze differences and similarities of the responses of east Asian states toward Europeans, along with accompanying attitudes toward Europeans. - Examine the impact of east Asia and Europe on each other between 1400 and 1800. - Examine the impact of Early Modern China on other Asian nations, particularly Korea and the southeast Asian states. - Identify and assess the responses of Korea and the southeast Asian states to the continuing Chinese presence across east Asia between 1400 and 1800. Key Terms / Key Concepts Tokugawa Shogunate - the last feudal Japanese military government, ruling from 1603 to 1867, the end of which paved the way for the Meiji Restoration and the modernization of Japan Early Modern Northeast Asia The societies of northeast Asia included Japan, Korea, and Manchuria, with China and Russia as peripheral imperial powers. Geographically Japan and Korea were on the periphery of northeast Asia. 
Each was also affected by the mercantile and religious intrusions of European maritime imperial powers, including England, France, the Netherlands, Portugal, and Spain. The peoples of northeast Asia were subject to two overlapping sets of imperial competitors: China, Manchuria, and Russia from within the region and the European powers along the coastlines of northeast Asia and the Japanese islands. Among the Chinese, Manchu, and Russian empires, China was the foundational power of east Asia. Historically the succession of Chinese dynasties from the Han Dynasty to the present Communist Party had periodically taken the initiative in the imperial competition for east and central Asia. Various Han rulers had attempted to extend imperial boundaries into central and southeastern Asia. A number of Tang rulers continued these efforts in the same directions. Such efforts alternated with retrenchment to protect core areas of the empire. The Ming and Qing Dynasties of China Although China was still a world power and the major regional power in east Asia during the early modern period, it experienced a steady decline in both respects during the Ming and the Qing Dynasties. This decline, however, was not perceptible to most at the beginning of the Ming Dynasty. From the inception of the Ming Dynasty in 1368 into the mid-fifteenth century China pursued expansion, which belied any signs of decline during the fourteenth and fifteenth centuries. This expansion included seven voyages by Zheng He into the western Pacific and Indian Oceans from 1405 to 1433. Zheng He’s expeditions sought to expand Chinese trade and influence. Emperor Yongle initiated these expeditions as part of his larger program for Ming expansion. It is possible that Chinese mariners also made it to the Pacific coast of the Americas, although the evidence is not conclusive. 
From the mid-fifteenth century Ming officials discontinued these efforts in favor of a new focus on domestic priorities, including defense. This change in policy intersected with other developments that marked the gradual decline and eventual fall of the Ming Dynasty. In 1644 the Manchus overthrew the Ming Dynasty and established the Qing Dynasty. This new dynasty was part of the rise of Manchu power in northeast Asia. Manchuria is northeast of China and north of the Korean peninsula, and the Manchurian people are culturally distinct from the Chinese. During the latter half of the seventeenth and the eighteenth centuries the Manchus competed with an expanding Russian empire and ambitious European maritime powers for influence across and control over northeast Asia. During this period the Manchurian empire shared with China status as a premier power in east and northeast Asia, ruling China through the Qing Dynasty. However, from the late nineteenth century Manchu power in this region waned, due to the growing power of Russia and Japan. Russian power gradually expanded toward and then across northeastern Asia beginning in the late seventeenth century. And Manchuria could not compete with Russian resources, as Russia expanded eastward across the northern half of Asia. From the fourteenth into the eighteenth centuries Japan and Korea were subject to the imperial ambitions of China and then Manchuria. By the late eighteenth century Russia was also asserting itself into Japanese and Korean affairs. In addition, European traders and missionaries had been travelling to northeast Asia by way of the Pacific Ocean since the sixteenth century. Historically both the Japanese and the Koreans had been subject to Chinese influence since at least the Han Dynasty. Korea The Koreans were, possibly, the northeast Asian people most affected by and vulnerable to Chinese, Manchu, and Russian imperialism. 
Despite the nickname of the Hermit Kingdom, the history of the Korean peninsula had not been marked by isolation. Throughout its history various Korean states had interacted with China, Mongolia, Manchuria, and, from the sixteenth century, European maritime powers. At various times Koreans embraced, adapted, and resisted cultural, economic, political, and religious influence from these peoples. Among these peoples the Korean relationship with China was the most significant. That relationship has been defined by Chinese influence and power over Korea. Chinese forces would have the advantage in any invasion of the Korean peninsula, and accordingly the succession of Korean states throughout history has had to accept Chinese power and influence. However, the geography of the Korean peninsula makes such an invasion problematic for Chinese forces. This juxtaposition of Chinese power and Korean geography has fostered a tension that continues to define the relationship between these two countries all the way to the present, as manifested in China’s present vulnerability to the possible use of nuclear weapons by the North Korean government under Kim Jong-un. Tokugawa Japan During this period Japan underwent political centralization with the emergence of the Tokugawa Shogunate. In Japanese history shoguns were military overlords who controlled Japan in a feudal structure. The Tokugawa Shogunate came to power through the efforts of a succession of three military leaders during the late sixteenth century. After taking control of Japan the Tokugawa Shogunate put into place a number of restrictions which prohibited all forms of contact with all the European powers except the Dutch. Shogunate leaders feared that European influence, including that of Christian missionaries, threatened their control. Through these restrictions the Shogunate also sought to preserve traditional Japanese society. 
Being an archipelago, Japan was better able to hold off Chinese, Manchurian, and Russian expansion. During the thirteenth century storms and the difficulties of crossing the Sea of Japan had protected Japan from conquest by Khubilai Khan’s Mongol forces. At that time Khubilai controlled China through his short-lived Yuan Dynasty. During the first half of the seventeenth century the new, indigenous Tokugawa Shogunate, which ruled Japan from 1603 to 1867, progressively closed off Japan to almost all foreign contacts, particularly with Russia and the European powers, both to protect its control over Japan and to preserve Japanese culture. Northeast Asia at the End of the Early Modern Period By the mid-nineteenth century China, Japan, Korea, and Manchuria were increasingly subject to European incursions in the forms of trade, religious missionaries, and military control. Each tried its own strategy in response to these intrusions, with widely varying degrees of success. Primary Source: Text of the Sakoku Edict Text of the Sakoku (Closed Country) Edict of June 1636 1. No Japanese ships may leave for foreign countries. 2. No Japanese may go abroad secretly. If anybody tries to do this, he will be killed, and the ship and owner/s will be placed under arrest whilst higher authority is informed. 3. Any Japanese now living abroad who tries to return to Japan will be put to death. 4. If any Kirishitan believer is discovered, you two (Nagasaki bugyo) will make a full investigation. 5. Any informer/s revealing the whereabouts of a bateren will be paid 200 or 300 pieces of silver. If any other categories of Kirishitans are discovered, the informer/s will be paid at your discretion as hitherto. 6. On the arrival of foreign ships, arrangements will be made to have them guarded by ships provided by the Omura clan whilst report is being made to Yedo, as hitherto. 7. 
Any foreigners who help the bateren or other criminal foreigners will be imprisoned at Omura as hitherto. 8. Strict search will be made for bateren on all incoming ships. 9. No offspring of southern Barbarians will be allowed to remain. Anyone violating this order will be killed, and all relatives punished according to the gravity of the offence. 10. If any Japanese have adopted the offspring of southern Barbarians they deserve to die. Nevertheless, such adopted children and their foster-parents will be handed over to the Southern Barbarians for deportation. 11. If any deportees should try to return or to communicate with Japan by letter or otherwise, they will of course be killed if they are caught, whilst their relatives will be severely dealt with, according to the gravity of the offence. 12. Samurai are not allowed to have direct commercial dealings with either foreign or Chinese shipping at Nagasaki. 13. Nobody other than those of the five places (Yedo, Kyoto, Osaka, Sakai and Nagasaki) is allowed to participate in the allocation of ito-wappu. 14. Purchases can only be made after the ito-wappu is fixed. However, as the Chinese ships are small, you will not be too rigorous with them. Only twenty days are allowed for the sale. 15. The twentieth day of the ninth month is the deadline for the return of foreign ships, but latecomers will be allowed fifty days grace from the date of their arrival. Chinese ships will be allowed to leave a little after the departure of the (Portuguese) galliots. 16. Unsold goods cannot be left in charge of Japanese for storage or safekeeping. 17. Representatives of the five (shogunal) cities should arrive at Nagasaki not later than the fifth day of the long month. Late arrivals will not be allowed to participate in the silk allocation and purchase. 18. 
Ships arriving at Hirado will not be allowed to transact business until after the nineteenth day of the fifth month of the thirteenth year of Kwanei (June 22, 1636) From University of Pittsburgh Translation from C.R. Boxer. The Christian Century in Japan. Attributions Licenses and Attributions CC LICENSED CONTENT, SHARED PREVIOUSLY - Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION - Title Image - 15th century portrait of Korean King Taejo. Attribution: Unknown authorUnknown author, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:%EC%A1%B0%EC%84%A0_%ED%83%9C%EC%A1%B0.JPG. License: CC BY-SA: Attribution-ShareAlike - History of Korea. Location: https://en.wikipedia.org/wiki/History_of_Korea. License: CC BY-SA: Attribution-ShareAlike - Terakoya. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Terakoya. License: CC BY-SA: Attribution-ShareAlike - History of Japan. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Shogun. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Edo period. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Edo society. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Tokugawa Ieyasu. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Battle of Sekigahara. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Sakoku. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Tokugawa shogunate. Provided by: Wikipedia. License: CC BY-SA: Attribution-ShareAlike - Tokugawa_Ieyasu2.JPG. Provided by: Wikimedia Commons. License: Public Domain: No Known Copyright
https://oercommons.org/courseware/lesson/87893/overview
The Development of the African Transatlantic Slave Trade Overview The Development of the African Transatlantic Slave Trade The advance of a world-wide market economy and capitalism provided a framework for the development of Transatlantic trade and the slave trade. Learning Objectives - Describe the factors that led to the development of the African Slave Trade in Europe, the Americas, and Africa. Key Terms / Key Concepts - Mercantilism: an economic system consisting of a royal government controlling colonies abroad and overseeing trade and land-holdings at home. (The ultimate example of this system was the biggest owner of colonies that produced bullion: Spain.) - Triangle Trade: a trading system between Africa, the Americas, and Europe (Slaves from Africa were shipped to the New World to work on plantations. Raw goods—e.g. sugar, tobacco, cotton, coffee—were processed and shipped to Europe. Finished and manufactured goods were then shipped to Africa to exchange for slaves.) - Indentured Servants: Europeans who worked as virtual slaves in the New World under contract for 4-7 years, typically in exchange for passage across the Atlantic - Asiento System: direct slave trading contracts between the Spanish government and European merchants to sell slaves within the Spanish Empire in Latin America (This system broke up the Portuguese slave trade monopoly after 1580. The Dutch took advantage of these contracts to compete with the Portuguese and Spanish for direct access to African slave trading, and the British and French eventually followed.) Prelude to Trade Empires and Early Capitalism European society underwent a major change during the early modern period with regard to its outlook on wealth and property. Along with that change came the growth of a new kind of state and society, one not only defined by the growth of bureaucracy seen in absolutism but also by the power of the moneyed classes whose wealth was not predicated on owning land.
The rise of that class to prominence in certain societies, especially those of the Netherlands and England, accompanied the birth of the most distinctly modern form of economics: capitalism. In the Middle Ages, wealth, land, and power were intimately connected. Nobles were defined by their ownership of land and by their participation in armed conflict. That changed by the early modern period, especially as it became increasingly common for monarchs to sell noble titles to generate money for the state. By the seventeenth century the European nobility were split between “nobles of the sword”—who inherited their titles from their warlike ancestors— and “nobles of the robe”—who had either been appointed by kings or purchased titles. Both categories of nobility were far more likely to be owners of land who exploited peasants than to be warriors. Among almost all of them, there was considerable contempt for merchants, who were often seen as parasites who undermined good Christian morality and the proper order of society. Even nobles of the robe, who had only joined the nobility within the last generation, tended to cultivate a practiced loathing for mere merchants, who they felt were socially inferior. In addition, the economic theory of the medieval period posited that there was a finite, limited amount of wealth in the world, and that the only thing that could be done to become wealthier was to get and hold on to more of it. In the medieval and even Renaissance-era mindset, the only forms of wealth were land and bullion (precious metals), and since there is only so much land and so much gold and silver out there, if one society grew richer, by definition every other society grew poorer. According to this finite resource mindset, kingdoms could only increase their wealth by seizing more territory, especially territory that would somehow increase the flow of precious metals into royal coffers. 
Trade was only important insofar as trade surpluses with other states could be maintained, thereby ensuring that more bullion was flowing into the economy than was flowing out. Colonies abroad provided raw materials and bullion itself. As a whole, this concept was called mercantilism: an economic system consisting of a royal government controlling colonies abroad and overseeing landholdings at home. The ultimate example of this system was the biggest owner of colonies that produced bullion: Spain. Mercantilism worked well enough, but commerce fit awkwardly into its paradigm. Trade was not thought to generate new wealth, since it did not directly dig up more silver or gold, nor did it seize wealth from other countries. Trade did not "make" anything according to the mercantilist outlook. Of all classes of society, bankers in particular were despised by traditional elites since they not only did not produce anything themselves but also profited off of the wealth of others. These attitudes started undergoing significant changes in the sixteenth and seventeenth centuries, mostly as a result of the incredible success of overseas corporations—groups that generated enormous wealth outside of the auspices of mercantilist theory. Many of the beneficiaries of the new wealth of the sixteenth and seventeenth centuries were not noblemen; they were instead wealthy merchant townsfolk, especially in places like the Dutch Republic and, later, England. These were men who amassed huge fortunes but did not fit neatly into the existing power structure of landholding nobles, the church, and the common people. These changes inspired an increasingly spirited battle over the rights of property, spurring the idea that not just land but wealth itself was something that the state should protect and encourage to grow. Early Capitalism The growth of commercial wealth was closely tied to the growth of overseas empires. 
The initial wave of European colonization (mostly in the Americas) had been driven by a search for gold and a desire to convert foreigners to Christianity. However, European powers came to pursue colonies and trade routes in the name of commodities and the wealth they generated by the seventeenth century. In this period of empire-building, European states sought additional territory and power overseas primarily for economic reasons. Because of the enormous wealth to be generated from not only gold and silver but also from commodities like sugar, tobacco, and coffee (as well as luxury commodities like spices that had always been important), the states of Europe were willing to war constantly among themselves as well as to perpetrate one of the greatest crimes in history: the Atlantic Slave Trade. In short, in the seventeenth and eighteenth centuries the first phase of a system that would later be called capitalism arose—an economic system in which the exchange of commodities for profit generated wealth to be reinvested in the name of still greater profits. In turn, capitalism is dependent on governments that enforce legal systems that protect property and, historically, on wars with rivals that tried to carve out bigger chunks of the global market. To reiterate, capitalism was (and remains) a combination of two major economic and political phenomena: enterprises run explicitly for profit and a legal framework to protect and encourage the generation of profit. The pursuit of profit was nothing new, historically, but the political power enjoyed by merchants, the political focus on overseas expansion for profit, and the laws enacted to encourage these processes were new. Overseas Expansion in the Seventeenth and Eighteenth Centuries The development of early capitalism was intimately connected with overseas expansion.
Europe was an important center of a truly global economy by the seventeenth century, and it was that economy that fueled the development of capitalistic, commercial societies in places like the Netherlands and England. While the original impulse behind overseas expansion during this period was primarily commercial—focusing on the search for commodities and profit—it was also a major political focus of all of the European powers by the eighteenth century. In other words, European elites actively sought not just to trade with overseas territories but also to conquer and control, both for profit and for their own political "glory" and aggrandizement. The result was a dramatic expansion of European influence or direct control in areas of the globe in which Europe had never before had an influence. As a result, by 1800 roughly 35% of the globe was directly or indirectly controlled by European powers. Military technology and organization were key factors in this European global expansion. The early-modern military revolution (i.e. the evolution of gunpowder warfare during and after the Renaissance period) resulted in highly-trained soldiers with the most advanced military technology in the world by the late seventeenth century. As European powers expanded, they built fortresses in the modern style and defended them with cannons, muskets, and warships that often outmatched the military forces and technology they encountered. In the case of China, Japan, and the Philippines, for instance, local rulers learned that the easiest way to deal with European piracy was not to try to fight European ships, but instead to cut off trade with European merchants until restitution had been paid. European states also benefited from the relative political fragmentation of parts of the non-European world. There were powerful kingdoms and empires in Africa, the Middle East, and Asia that defied European attempts at hegemony, but much of the world was controlled by smaller states.
A prime example is India. This region had become divided into dozens of small kingdoms, along with a few larger ones, due to the decline of the Mughal Empire by the early eighteenth century. When the British and French began taking control of Indian territory, it was against the resistance of small Indian kingdoms, not some kind of (nonexistent) overall Indian state. An important note regarding European colonial power: this period saw the consolidation of European holdings in the New World and the beginning of empires in places like India, but it did not include major landholdings in Africa, the Middle East, or East Asia. In places with powerful states—China, the Ottoman Empire, and Japan—even the relative superiority of European arms was not sufficient to seize territory. Likewise, not only were African states able to fight off Europeans, but African diseases also made it impossible for large numbers of Europeans to colonize or occupy much African territory. As the Slave Trade burgeoned, Europeans did launch slave raids, but most slaves had been captured by African slavers who enjoyed enormous profits from the exchange. Likewise, European states and the corporations they supported worked diligently to establish monopolies on trade with various parts of the world. However, "monopolies" in this case only meant monopolies in trade going to and from Europe. There were enormous, established, and powerful networks of trade between Africa, India, South Asia, Southeast Asia, China, Japan, and the Pacific, all of which were dominated by non-European merchants. To cite one example, the Indian Ocean had served as an oceanic crossroads of trade between Africa and Asia for thousands of years.
Europeans broke into those markets primarily by securing control of goods that made their way back to Europe rather than seizing control of intra-Asian or African trade routes, although they did try to dominate those routes when they could, and Europeans were able to seize at least some territories directly in the process. The Netherlands The Dutch were at the forefront of these changes. During their rebellion against Spain in the late sixteenth century, the Dutch began to look to revenue generated from trade as an economic lifeline. They served as the middlemen of European commerce, shipping and selling goods like timber from Russia, textiles from England, and wine from Germany, and they also increasingly served as Europe's bankers. The Dutch invented both formalized currency exchange and the stock market, both of which led to huge fortunes for Dutch merchants. A simple way to characterize the growth of Dutch commercial power was that the Netherlands replaced northern Italy as the heart of European trade after the Renaissance. In 1602, Dutch merchants, with the support of the state, created the world's first corporation: the Dutch East India Company (VOC in its Dutch acronym). It was created to serve as the republic's official trading company—a company with a legal monopoly to trade within a certain region: India and Southeast Asia. The VOC proved phenomenally successful in pushing out other European merchants in the Indies, through a combination of brute force and the careful deployment of legal strategies. A common approach was to offer "protection" from the supposedly more rapacious European powers, like Portugal, in return for trade monopolies from spice-producing regions. In many cases, the VOC simply used the promise of protection as a smokescreen for seizing complete control of a given area, especially in Indonesia, which eventually became a Dutch colony.
In other areas local rulers remained in political control but lost power over their own spice production and trade. For the better part of the seventeenth century, the Dutch controlled an enormous amount of the hugely profitable trade in luxury goods and spices from the East Indies as a result. The profits for Dutch merchants and investors were concomitantly high. As an example, above and beyond direct profits by individual members of the company, all stockholders in the VOC received dividends of 30% on their investments within the first ten years, in addition to a dramatic boost in value of the stocks themselves. The other states of Europe were both aghast at Dutch success and grudgingly admiring of it. In 1601, there were 100 more Dutch ships in the port of London at any given time than there were English ships, and by 1620 about half of all European merchant vessels were Dutch. In 1652, the Dutch seized control of the Cape of Good Hope at the southern tip of Africa, allowing them to control shipping going around Africa en route to Asia. They also exerted additional military force in the Indies to force native merchants to trade only with them and not other Europeans. The Dutch takeover of the Cape of Good Hope was the historical origin of the modern nation of South Africa; they were the first permanent European settlers. The Dutch were also the only European power allowed to keep a small trading colony in Japan, which was otherwise completely cut off to westerners after 1641 (thanks to a failed Portuguese-sponsored Christian uprising against the Japanese shogun). The iconic moment in the history of the Dutch golden age of early capitalism was the tulip craze of the 1620s – 1630s. Tulips grow well in the Netherlands and had long been cultivated for European elites. A tulip fad among Dutch elites in the 1620s drove up the price of tulip bulbs dramatically.
Soon, enterprising merchants started buying and selling bulbs with no intention of planting them or even selling them to someone who would; they simply traded the bulbs as a valuable commodity unto themselves. In 1625, one bulb was sold for 5,000 guilders, about half the cost of a mansion in Amsterdam. However, the real height of the craze was the winter of 1636 – 1637, when individual bulbs sometimes changed hands ten times in a day for increasing profits. This was the equivalent of "flipping" bulbs; it had nothing to do with the actual tulips any longer. The element to emphasize is not just the seemingly irrational nature of the boom, but the mindset: the Dutch moneyed classes were already embracing speculative market economies, in which the value of a given commodity has almost nothing to do with what it does, but with what people are willing to pay for it. In capitalist economies this phenomenon often leads to "bubbles" of rising values that then eventually collapse. In this case, the tulip craze did indeed come crashing down at the end of that same winter, in February 1637, but in the meantime, it presaged the emergence of commodity speculation for centuries to come. The development of this early form of capitalism unquestionably originated in the Netherlands, but it spread from there. One by one, the other major states of Europe started to adopt Dutch methods of managing finances: sophisticated accounting, carefully organized tax policy, and an emphasis on hands-on knowledge of finances up to the highest levels of royal government. For example, Louis XIV insisted that his son study political economy, and Colbert, Louis' head of finance, wrote detailed instructions on how a king should oversee state finances. This was a significant change, since until the mid-seventeenth century at the earliest, to be a king was to refuse to dirty one's hands with commerce.
It was because of the incredible success of the Dutch that kings and nobles throughout Europe began to change their outlooks and values. Ultimately, at least among some kings and nobles in Western Europe, humanistic education and the traditional martial values of the nobility were combined with practical knowledge of, or at least appreciation for, mercantile techniques. In the end, the Dutch Golden Age was the seventeenth century. When the Netherlands was dragged into the wars initiated by Louis XIV toward the end of the seventeenth century, it spelled the beginning of the end for their dominance. The other states of Europe began to focus their own efforts on trade and were able to surpass Dutch efforts, although not their prosperity; the Netherlands has remained a resolutely prosperous country ever since. During that period, however, the Dutch had created a global trade network, proved that commercial dominance would play a crucial role in political power in the future, and overseen a cultural blossoming of art and architecture. Britain and the Slave Trade Of the other European states, the British were the most successful at imitating the Dutch. In 1667 the British king Charles II officially designated the royal treasury as the coordinating body of British state finances and made sure it was overseen by officials trained in the Dutch style of political economy. The British parliament grew increasingly savvy with financial issues as well, having numerous debates about the best and most profitable use of state funds. In 1651, both to try to seize trade from the Dutch and to fend off Britain's traditional enemies—France and Spain—parliament passed the English Navigation Acts, which reserved commerce with English colonies for English ships. This, in turn, led to extensive piracy and conflict between the powers of Europe in their colonial territories, as they tried to seize profitable lands and enforce their respective monopolies.
Ultimately, the British fought three wars with the Dutch over the course of the seventeenth century and, among other things, seized the Dutch port of New Amsterdam in North America (which the English promptly renamed New York). Britain also fought Spain in both the seventeenth and eighteenth centuries, ultimately acquiring Jamaica and Florida as colonies. In terms of trade, the major prize, at least initially, was the Caribbean, due to its suitability for growing sugar. Sugar quickly became the colonial product, hugely valuable in Europe and relatively easy to cultivate compared to exotic products like spices, which were only available from Asian sources. And it was ultimately the profits of sugar that helped bankroll the British growth in power in the seventeenth and, especially, the eighteenth centuries. During this period, sugar consumption in Europe doubled every 25 years. The only efficient way to grow sugar was through proto-industrialized plantations with rendering facilities built to extract the raw sugar from sugar cane. That, in turn, required an enormous amount of back-breaking, dangerous labor. Most Native American slaves quickly died off or escaped, and hence the Atlantic Slave Trade between Africa and the New World began in earnest by the early seventeenth century. The Slave Trade between Africa and the New World was, quite simply, one of the worst injustices of human history. Millions of people were ripped from their homeland, transported to a foreign continent in atrocious conditions, and either worked to death or murdered by their owners in the name of "discipline." The contemporary North American perception of the life of slaves—that of incredibly difficult but not always lethal conditions of work—is largely inaccurate because only a small minority of slaves were ever sent to North America.
The vast majority of slaves were instead sent to the Caribbean or Brazil, both areas in which working conditions were far worse than the (still abysmal) working conditions present in North America. Sugar was the major crop of the Caribbean and one of the major crops of Brazil. And the average life of a slave once introduced to sugar cultivation was seven years before he or she died from exhaustion or injury. In sum, most slaves were sent to be worked to death on sugar plantations. The slave trade was part of what historians have described as the "triangle trade" between Africa, the Americas, and Europe. Slaves from Africa were shipped to the New World to work on plantations. Raw goods—e.g. sugar, tobacco, cotton, coffee—were processed and shipped to Europe. Finished and manufactured goods were then shipped to Africa to exchange for slaves. This cycle of exchange grew decade-by-decade over the course of the seventeenth and eighteenth centuries. The leg of the triangle trade that connected Africa and the Americas was known as the Middle Passage because slave ships went directly across the middle of the Atlantic, most traveling to Brazil or the Caribbean, as noted above. Slaves on board ships were packed in so tightly they could not move for most of the voyage, with slave ship captains calculating into their profit margins the fact that a significant percentage of their human cargo would die en route. Over a million slaves died in the seventeenth and eighteenth centuries as a result of the Middle Passage. In turn, over 90% of the millions of slaves that were sent to the Caribbean or Brazil perished from exhaustion or injury while cultivating sugar and coffee, as well as while mining in Brazil. This resulted in a demand for constant slave replacements. The Atlantic Slave Trade was the first time in history that slavery was specifically racial in character.
Because it was Africans who were enslaved to work in the Americas under the control of Europeans, Europeans developed a range of racist theories to excuse the obvious immorality of the practice. In fact, the whole idea of human "race" is largely derived from the Slave Trade. Biologically, "race" is nothing more than a handful of unimportant cosmetic differences between people, but thanks to the history of the enslavement of Africans, Europeans in the early modern period led the charge in describing "race" as some kind of fundamental human category, with some races supposedly enjoying "natural" superiority. That conceit casts a perverse shadow over the present. The Enslavement of Africans The transatlantic slave trade was the largest long-distance coerced movement of people in history and, prior to the mid-nineteenth century, formed the major demographic well-spring for the re-peopling of the Americas following the collapse of the Amerindian population. Cumulatively, as late as 1820, nearly four Africans had crossed the Atlantic for every European, and, given the differences in the sex ratios between European and African migrant streams, about four out of every five females that traversed the Atlantic were from Africa. The Atlantic Ocean was once a formidable barrier that prevented regular interaction between those peoples inhabiting the four continents it touched; beginning in the late fifteenth century, it became a commercial highway that integrated the histories of Africa, Europe, and the Americas for the first time. As the above figures suggest, slavery and the slave trade were the linchpins of this process. With the decline of the Amerindian population, labor from Africa formed the basis for the exploitation of the gold and agricultural resources from the Americas, with sugar plantations absorbing well over two thirds of slaves carried across the Atlantic by the major European and Euro-American powers.
For several centuries slaves were the most important reason for contact between Europeans and Africans. European expansion to the Americas mainly affected tropical and semi-tropical areas. Several products that were either previously unknown to Europeans (like tobacco) or previously had been a luxury for Europeans (like gold or sugar) could now be obtained by Europeans in abundant amounts. But while Europeans could control the production of such exotic goods, it became apparent in the first two centuries after 1500 that they chose not to supply the labor that would make such output possible. Free European migrants and indentured servants never traveled across the Atlantic in sufficient numbers to meet the labor needs of expanding plantations. Convicts and prisoners—the only Europeans who were ever forced to migrate—were too few in number. Slavery or some form of coerced labor was the only possible option if European consumers were to gain access to more tropical produce and precious metals. Europeans came to rely on Africans as slaves due to the different values of societies around the Atlantic and, more particularly, the way groups of people involved in creating a trans-Atlantic community saw themselves in relation to others—in short, how they defined their identity. Ocean-going technology brought Europeans into large-scale face-to-face contact with peoples who were culturally and physically more different from themselves than any others with whom they had interacted in the previous millennium. In neither Africa nor Asia could Europeans initially threaten territorial control, with the single and limited exception of western Angola. African capacity to resist Europeans ensured that sugar plantations were established in the Americas rather than in Africa. But if Africans, aided by tropical pathogens, were able to resist the potential European invaders, some Africans were prepared to sell slaves to Europeans for use in the Americas.
As this suggests, European domination of Amerindians was complete. Indeed, from the European perspective it was much too complete. The epidemic diseases of the Old World destroyed not only native American societies, but also a potential labor supply. Every society in history before 1900 provided at least an unthinking answer to the question of which groups are to be considered eligible for enslavement, and normally they did not recruit heavily from their own community. A revolution in ocean-going technology gave Europeans the ability to get continuous access to remote peoples and move them against their will over very long distances. Strikingly, it would have been much cheaper to obtain slaves in Europe than to send a vessel to a coast of Africa that lacked proper harbors and lay remote from European political, financial, and military power. That this option was never seriously considered suggests a European inability to enslave other Europeans. Except for a few social deviants, neither Africans nor Europeans would enslave members of their own societies, but in the early modern period, Africans had a somewhat narrower conception of who was eligible for enslavement than Europeans had. It was this difference in definitions of eligibility for enslavement which explains the dramatic rise of the trans-Atlantic slave trade. Slavery, which had disappeared from northwest Europe long before this point, exploded into a far greater significance and intensity than it had possessed at any point in human history. The major cause was a dissonance in African and European ideas of eligibility for enslavement, at the root of which lay culture or societal norms, not easily tied to economics. Without this dissonance, there would have been no African slavery in the Americas.
Europeans shared a common Christian identity that discouraged them from enslaving fellow European believers, whereas African peoples were divided among diverse religions and cultures, whose members were willing to enslave peoples of opposing cultures. The slave trade was thus a product of differing constructions of social identity and the ocean-going technology that brought Atlantic societies into sudden contact with each other. The trans-Atlantic slave trade grew from a strong, initially European, demand for labor in the Americas, driven by consumers of plantation produce and precious metals. Because Amerindians died in large numbers, and insufficient numbers of Europeans were prepared to cross the Atlantic, the form that this demand took was shaped by conceptions of social identity on four continents, which ensured that the labor would comprise mainly slaves from Africa. But the central question of which peoples from Africa went to a given region of the Americas, and which group of Europeans or their descendants organized such a movement, cannot be answered without an understanding of the wind and ocean currents of the North and South Atlantic. There are two systems of wind and ocean currents in the North and South Atlantic that follow the pattern of giant wheels—one lies north of the equator and turns clockwise, while its counterpart to the south turns counterclockwise. The northern wheel largely shaped the north European slave trade and was dominated by the English. The southern wheel shaped the huge traffic to Brazil, which for three centuries was the almost exclusive preserve of the largest slave traders of all, the Portuguese. Despite their use of the Portuguese flag, slave traders using the southern wheel ran their business from ports in Brazil, not in Portugal. Winds and currents thus ensured two major slave trades: the first rooted in Europe, the second in Brazil.
Winds and currents also ensured that Africans carried to Brazil came overwhelmingly from Angola, with south-east Africa and the Bight of Benin playing smaller roles. Africans carried to North America, including the Caribbean, left mainly from West Africa, with the Bights of Biafra and Benin and the Gold Coast predominating. Just as Brazil overlapped on the northern system by drawing on the Bight of Benin, some slaves from northern Angola were carried into the Caribbean by the English, French, and Dutch. Early Slaving Voyages The first Africans forced to work in the New World left from Europe at the beginning of the sixteenth century, not from Africa. There were few vessels that carried only slaves on this early route, so that most would have crossed the Atlantic in smaller groups on vessels carrying many other commodities, rather than dedicated slave ships. Such a slave route was possible because an extensive traffic in African slaves from Africa to Europe and the Atlantic islands had existed for half a century before Columbian contact, such that ten percent of the population of Lisbon was black in 1455, and black slaves were common on large estates in the Portuguese Algarve. The first slave voyage direct from Africa to the Americas probably sailed in 1526. Before mid-century, all transatlantic slave ships sold their slaves in the Spanish Caribbean, with the gold mines in Cibao on Hispaniola emerging as a major purchaser. Cartagena, in modern Colombia, appears as the first mainland Spanish American destination for a slave vessel, which landed in the year 1549. On the African side, the great majority of people entering the early slave trade came from the Upper Guinea coast, and moved through Portuguese factories initially in Arguim, and later the Cape Verde islands. Nevertheless, the 1526 voyage set out from the other major Portuguese factory in West Africa—Sao Tome in the Bight of Biafra—though the slaves almost certainly originated in the Congo.
The slave traffic to Brazil, eventually accounting for about forty percent of the trade, got underway around 1560. Sugar drove this traffic, as Africans gradually replaced the Amerindian labor force on which the early sugar mills (called engenhos) had depended from 1560 to 1620. By the time the Dutch invaded Brazil in 1630, Pernambuco, Bahia, and Rio de Janeiro were supplying almost all of the sugar consumed in Europe, and almost all the slaves producing it were African. Consistent with the earlier discussion of Atlantic wind and ocean currents, there were two major branches of the trans-Atlantic slave trade operating by 1640: one to Brazil, and the other to the mainland Spanish Americas. Together they accounted for fewer than 7,500 departures a year from the whole of sub-Saharan Africa, almost all of them by 1600 from west-central Africa. The sugar complex spread to the eastern Caribbean from the beginning of the 1640s. Sugar consumption steadily increased in Europe, and the slave system began two centuries of westward expansion across tropical and sub-tropical North America. At the end of the seventeenth century, gold discoveries, first in Minas Gerais and later in Goias and other parts of Brazil, began a transformation of the slave trade which triggered further expansion of the business. In Africa, the Bights of Benin and Biafra became major sources of supply, in addition to Angola, and were joined later by the more marginal provenance zones of Sierra Leone, the Windward Coast, and South-east Africa. The volume of slaves carried off reached thirty thousand per annum in the 1690s and eighty-five thousand a century later. More than eight out of ten Africans pulled into the traffic in the era of the slave trade made their journeys in the century and a half after 1700.

Establishing the Trade

In the fifteenth century, Portugal became the first European nation to take significant part in African slave trading. 
The Portuguese primarily acquired slaves for labor on Atlantic African island plantations, and later for plantations in Brazil and the Caribbean, though they also sent a small number to Europe. Initially, Portuguese explorers attempted to acquire African labor through direct raids along the coast, but they found that these attacks were costly and often ineffective against West and Central African military strategies. For example, in 1444, Portuguese marauders arrived in Senegal ready to assault and capture Africans using armor, swords, and deep-sea vessels. But the Portuguese discovered that the Senegalese outmaneuvered their ships using light, shallow water vessels better suited to the estuaries of the Senegalese coast. In addition, the Senegalese fought with poison arrows that slipped through their armor and decimated the Portuguese soldiers. Subsequently, Portuguese traders generally abandoned direct combat and established commercial relations with West and Central African leaders, who agreed to sell slaves taken from various African wars or domestic trading, as well as gold and other commodities, in exchange for European and North African goods. Over time, the Portuguese developed additional slave trade partnerships with African leaders along the West and Central African coast and claimed a monopoly over these relationships, which initially limited access to the trade for other western European competitors. Despite Portuguese claims, African leaders enforced their own local laws and customs in negotiating trade relations. Many welcomed additional trade with Europeans from other nations. The Portuguese developed a trading relationship with the Kingdom of Kongo, which existed from the fourteenth to the nineteenth centuries in what is now Angola and the Democratic Republic of Congo. Civil War within Kongo during the trans-Atlantic slave trade would lead to many of its subjects becoming captives traded to the Portuguese. 
When the Portuguese, and later their European competitors, found that peaceful commercial relations alone did not generate enough enslaved Africans to fill the growing demands of the trans-Atlantic slave trade, they formed military alliances with certain African groups against their enemies. This encouraged more extensive warfare to produce captives for trading. While European-backed Africans had their own political or economic reasons for fighting with other African enemies, the end result for European traders in these military alliances was greater access to enslaved war captives. To a lesser extent, Europeans also pursued African colonization to secure access to slaves and other goods. For example, the Portuguese colonized portions of Angola in 1571 with the help of military allies from Kongo, but were pushed out in 1591 by their former allies. Throughout this early period, African leaders and European competitors ultimately prevented these attempts at African colonization from becoming as extensive as in the Americas. The Portuguese dominated the early trans-Atlantic slave trade on the African coast in the sixteenth century. As a result, other European nations first gained access to enslaved Africans through privateering during wars with the Portuguese, rather than through direct trade. When English, Dutch, or French privateers captured Portuguese ships during Atlantic maritime conflicts, they often found enslaved Africans on these ships, as well as Atlantic trade goods, and they sent these captives to work in their own colonies. In this way, privateering generated a market interest in the trans-Atlantic slave trade across European colonies in the Americas. After Portugal temporarily united with Spain in 1580, the Spanish broke up the Portuguese slave trade monopoly by offering direct slave trading contracts to other European merchants. 
Under this arrangement, known as the Asiento system, the Dutch took advantage of these contracts to compete with the Portuguese and Spanish for direct access to African slave trading, and the British and French eventually followed. By the eighteenth century, when the trans-Atlantic slave trade reached its trafficking peak, the British (followed by the French and Portuguese) had become the largest carriers of enslaved Africans across the Atlantic. The overwhelming majority of enslaved Africans went to plantations in Brazil and the Caribbean, and a smaller percentage went to North America and other parts of South and Central America.

Empire and Slavery

In the second half of the eighteenth century, six imperial systems straddled the Atlantic, each one sustained by a slave trade. The English, French, Portuguese, Spanish, Dutch, and Danish all operated behind trade barriers (termed mercantilistic restrictions) and produced a range of plantation produce: sugar, rice, indigo, coffee, tobacco, alcohol, and some precious metals, with sugar usually being the most valuable. It is extraordinary that consumers' pursuit of this limited range of exotic consumer goods, which collectively added so little to human welfare, could have generated the horrors and misery of the Middle Passage and plantation slavery for so long. Given the dominance of Portuguese and British slave traders, it is not surprising that Brazil and the British Americas received the most Africans, though both nations became adept at supplying foreign slave systems as well. Throughout the slave trade, more than seven out of every ten slaves went to these regions. The French Americas imported about half the slaves that the British did, with the majority going to Saint-Domingue. The Spanish flag, which dominated in the earliest phase of the trade before retreating in the face of competition, began to expand again in the late eighteenth century with the growth of the Cuban sugar economy. 
Over the following century—between 1750 and 1850—every one of these empires had either disappeared or become severely truncated. A massive shift to freer trade meant that, instead of six plantation empires controlled from Europe, there were now only three plantation complexes: two of which—Brazil and the United States—were independent, and the third, Cuba, was far wealthier and more dynamic than its European owner. Extreme specialization led to the United States producing most of the world's cotton, Cuba most of the world's sugar, and Brazil most of the world's coffee. Slaves thus might disembark in six separate jurisdictions in the Americas in the eighteenth century. But by 1850 they went overwhelmingly to only two areas: Brazil and Cuba. American cotton planters drew on Africa for almost none of their labor needs, relying instead on natural population growth and a domestic slave trade. Indeed, overall the United States absorbed only 5 percent of the slaves arriving in the Americas. This massive reorganization of the traffic and the rapid natural growth of the US slave population had little immediate impact on the size of the slave trade. The British, Americans, Danish, and Dutch dropped out of the slave trade, but the decade 1821 to 1830 still saw over 80,000 people a year leaving Africa in slave ships. Well over a million more—one tenth of the volume carried off in the slave trade era—followed in the next twenty years.

The Transatlantic Slave Trade

The African slave trade involved the interaction of different peoples from Europe, Africa, and the Americas.

Learning Objectives

- Describe the factors that led to the development of the African slave trade in Europe, the Americas, and Africa

Key Terms / Key Concepts

Yemassee War: A conflict between English colonists and the indigenous Yemassee people (1715–1717) that arose due to the enslavement of the Yemassee by colonial slave traders. 
Dahomey: An African kingdom in West Africa that grew wealthy on the Transatlantic slave trade in the 18th century.

Black Caribs: Peoples on the island of St. Vincent in the Caribbean where Africans intermarried and adopted the culture of the indigenous Carib people.

The Transatlantic Slave Trade

The emergence of the Trans-Atlantic slave trade overlaps with other important historical developments of the Early Modern World: the growth of capitalism and the Age of Exploration. Due to the demand in European markets for luxury goods from distant lands, merchants sought to obtain commodities such as tobacco and sugar, which were cultivated in the newly discovered lands of the Western Hemisphere. However, epidemic diseases, introduced by the Columbian Exchange that accompanied the Age of Exploration, had decimated the indigenous population. There was thus a severe shortage of labor across the Western Hemisphere. To meet the demand for labor, slave traders exported African slaves across the Atlantic to the colonies of the New World in large numbers beginning in the 16th century. In the English colonies of Barbados, Virginia, Carolina, and Maryland in North America in the 17th century, European indentured servants and enslaved natives at first served primarily as the labor force. However, the enslavement of natives resulted in costly wars between native tribes and the colonists (Yemassee War, 1715–1717), and indentured servants had to be set free after a contractually agreed period (usually 4–7 years). Consequently, by the early 18th century African slaves had become the primary labor force in these colonies as well.

What was the Trans-Atlantic Slave Trade?

The Trans-Atlantic slave trade was a flourishing business in world trade from the 16th through the 19th century. 
Although the term "African" can refer to any native of the African continent, the African peoples enslaved and exported across the Atlantic largely inhabited west central and east central Africa, in what are now the modern nations of Nigeria, Angola, and Mozambique. The enslavement and sale of Africans was a business operation that required the cooperation of European and African agents. Africans did not hesitate to sell other Africans into slavery. For example, the kingdom of Dahomey in West Africa in the 18th century conducted raids and wars against neighboring peoples to secure a steady supply of slaves to sell to European slave traders. Likewise, Scandinavian Vikings in the 9th century CE had no qualms about enslaving European Slavs in what is today Russia and selling them to Muslim slave traders. African Muslims did object to enslaving fellow Muslims, but the Muslim Fulani people (modern northern Nigeria) would enslave the non-Muslim Igbo and Yoruba peoples (modern southern Nigeria). Likewise, European slave traders justified their actions because the enslaved Africans were not Christians. The Transatlantic slave trade was also a risky venture. In crossing the Atlantic, slave traders limited the number of adult males in their cargoes and transported a majority of women and children, since they feared slave uprisings on their ships. Once successfully transported and sold in the Western Hemisphere, African slaves escaped and formed new independent communities in areas where the control of colonial authorities was weak or non-existent, such as the Black Caribs of St. Vincent in the Caribbean, the Maroons in the interior highlands of Jamaica, the Quilombos of Brazil, and the Angola community in Spanish Florida. Enslaved Africans also rose up violently in open rebellion against their owners, such as the Stono Rebellion in South Carolina (1739), the Nat Turner Rebellion in Virginia (1831), and the so-called "Baptist War" in Jamaica (1831). 
The massive slave rebellion on the French colony of Saint-Domingue (1791) resulted ultimately in the creation of the independent republic of Haiti in 1804. The experience of enslaved Africans in the Western Hemisphere also varied from region to region. Across much of Latin America and the Caribbean, slave owners often freed their African slaves or their children after the Roman Catholic Church baptized these slaves as Christians. Consequently, slave traders constantly imported new slaves from Africa into these regions. Children born from the unions of slave owners and slave women were also free. In the English North American colonies and later the United States, however, slave owners rarely freed their slaves even when these slaves converted to the Christian faith, and children born from enslaved women largely remained slaves even if their fathers were slave owners. Consequently, the slave population continued to expand in the United States even after the United States banned the import of African slaves in 1807. Even though the sale and ownership of slaves was a risky and dangerous business investment, the enslavement and transportation of an estimated 12 million Africans to the Western Hemisphere over three centuries is a testament to the power of the profit motive in a capitalist system.

Attributions

Title Image: Illustration in anti-slavery book by William Blake, 1860 - Internet Archive Book Images, No restrictions, via Wikimedia Commons

Adapted from:
http://creativecommons.org/licenses/by-nc-sa/3.0/us/
https://guides.hostos.cuny.edu/lac118/3-1
http://creativecommons.org/licenses/by-nc/4.0/
https://courses.lumenlearning.com/atd-tcc-worldciv2/chapter/the-transatlantic-slave-trade-2/
https://creativecommons.org/licenses/by-nc-nd/4.0/
Impact of the Transatlantic Slave Trade

Overview

The Transatlantic slave trade negatively affected the peoples and societies of Western and Central Africa.

Learning Objectives

Evaluate the effects slavery had on the economic and social life of African peoples, as well as on African states.

Key Terms / Key Concepts

Elmina: a fortified slave castle (feitoria) on the West African coast, now Ghana

Middle Passage: the voyage across the Atlantic from Africa to the Americas, which comprised the middle leg of the trans-Atlantic slave trade

Bight of Biafra: the "bend" (bight) of the central West African coast in a southerly direction

Whydah: West African port for exporting enslaved Africans used by the Kingdom of Dahomey in the 18th century

Luanda: West African port and Portuguese colony founded in 1575 (From this port alone an estimated 1.3 million enslaved Africans were exported to the Western Hemisphere, primarily to Brazil in South America.)

Jihad: according to Islamic teaching, Muslims are obligated to "struggle" (jihad), so that they will obey God's laws (the greater jihad) and non-Muslims will obey God's laws (the lesser jihad)

Sufism: mystical teaching of Islam that seeks spiritual unity with Allah (God)

Tariqa: the different schools of thought or brotherhoods into which Sufis are divided

Impact of the Transatlantic Slave Trade on the Peoples of Africa

The trans-Atlantic slave trade was the largest long-distance forced movement of people in recorded history. From the sixteenth to the late nineteenth centuries, over twelve million (some estimates run as high as fifteen million) African men, women, and children were enslaved, transported to the Americas, and bought and sold primarily by European and Euro-American slaveholders as chattel property used for their labor and skills. The trans-Atlantic slave trade occurred within a broader system of trade between West and Central Africa, Western Europe, and North and South America. 
In African ports, European traders exchanged metals, cloth, beads, guns, and ammunition for captive Africans brought to the coast from the African interior, primarily by African traders. Many captives died during the long overland journeys from the interior to the coast. European traders then held the enslaved Africans who survived in fortified slave castles before forcing them into ships for the Middle Passage across the Atlantic Ocean; among these slave castles were Elmina in the central region (now Ghana), Goree Island (in present-day Senegal), and Bunce Island (in present-day Sierra Leone). At first, some Europeans tried to use force in acquiring slaves, but this method proved impracticable. The only workable method was acquiring slaves through trade with Africans, since they controlled all trade into the interior. Typically, Europeans were restricted to trading posts, or feitorias, along the coast. Captives were brought to the feitorias, where they were processed as cargo rather than as human beings. Slaves were kept imprisoned in small, crowded rooms, segregated by sex and age, and "fattened up" if they were deemed too small for transport. They were branded to show what merchant had purchased them, that taxes had been paid, and even that they had been baptized as Christians. The slave trade's high mortality began with the forced march to the feitorias and continued during a slave's imprisonment within them. The mortality rate climbed further during the second part of the journey, the Middle Passage. The Middle Passage, the voyage across the Atlantic from Africa to the Americas, comprised the middle leg of the Atlantic Triangle Trade network, which traded manufactured goods such as beads, mirrors, cloth, and firearms to Africa for slaves. Slaves were then carried to the Americas, where their labor would produce the items of the last leg of the Triangle Trade, such as sugar, rum, molasses, indigo, cotton, and rice. The Middle Passage itself was a hellish experience. 
Slaves were segregated by sex, often stripped naked, chained together, and kept in extremely tight quarters for up to twenty-three hours a day. As many as 12–13 percent died during this dehumanizing experience. Although we will likely never know the exact number of people who were enslaved and brought to the Americas, the number is certainly larger than ten million. Slaves who arrived at various ports in the Americas were then sold in public auctions or smaller trading venues to plantation owners, merchants, small farmers, prosperous tradesmen, and other slave traders. These traders could then transport slaves many miles further to sell on other Caribbean islands or into the North or South American interior. Predominantly European slaveholders purchased enslaved Africans to provide labor that included domestic service and artisanal trades. The majority, however, provided agricultural labor and skills to produce plantation cash crops for national and international markets. Slaveholders used profits from these exports to expand their landholdings and purchase more enslaved Africans, perpetuating the trans-Atlantic slave trade cycle for centuries, until various European countries and new American nations officially ceased their participation in the trade in the nineteenth century (though illegal trans-Atlantic slave trading continued even after national and colonial governments issued legal bans).

Overview of the Impact of the Trans-Atlantic Slave Trade on Africa

The trans-Atlantic slave trade impacted the societies of West and East African peoples, who were often engaged in the trafficking of slaves to European slave traders. The sheer human and environmental diversity of the African continent makes it difficult to examine the trade from Africa as a whole. The slave trade did not expand, nor, indeed, decline, in all areas of Africa at the same time. 
Rather, a series of marked expansions (and declines) in individual regions contributed to a more gradual composite trend for sub-Saharan Africa as a whole. Each region that exported slaves experienced a marked upswing in the number of slaves it supplied for the trans-Atlantic trade and, from that point, the normal pattern was for a region to continue to export large numbers of slaves for a century or more. The three regions that provided the fewest slaves—Senegambia, Sierra Leone, and the Windward Coast in West Africa—reached these higher levels for much shorter periods. By the third quarter of the eighteenth century, all regions had undergone an intense expansion of slave exports. A cargo of slaves could be sought at particular points along the entire Western African coast. As the Brazilian coffee and sugar boom got under way near the end of the eighteenth century, slavers rounded the Cape of Good Hope and traveled as far as southeast Africa to fill their vessels' holds. But while the slave trade pervaded much of the African coast, its focus was no less concentrated in particular African regions than it was among European carriers. West Central Africa, the long stretch of coast south of Cape Lopez and stretching to Benguela, sent more slaves than any other part of Africa every quarter century, with the exception of a fifty-year period between 1676 and 1725. From 1751 to 1850, this region supplied nearly half of the entire African labor force in the Americas; in the half century after 1800, West Central Africa sent more slaves than all the other African regions combined. Overall, the center of gravity of the volume of the trade was located in West Central Africa by 1600. It then shifted northward slowly until about 1730, before gradually returning to its starting point by the mid-nineteenth century. Further, slaves left from relatively few ports of embarkation within each African region, even though their origins and ethnicities could be highly diverse. 
Although Whydah, on the Slave Coast, was once considered the busiest African slaving port on the continent, it now appears that it was surpassed by Luanda, in West Central Africa, and by Bonny, in the Bight of Biafra. These three most active ports together accounted for 2.2 million slave departures. The trade from each of these ports assumed a unique character and followed very different temporal profiles. Luanda alone dispatched some 1.3 million slaves, actively participating in the slave trade from as early as the 1570s—when the Portuguese established a foothold there—through the nineteenth century. Whydah supplied slaves over a shorter period of time and was a dominant port for only thirty years prior to 1727. Bonny, probably the second largest point of embarkation in Africa, sent four out of every five of all the slaves it ever exported in just the eighty years between 1760 and 1840. It is not surprising, therefore, that some systematic links between Africa and the Americas can be perceived. As research on the issue of trans-Atlantic connections has progressed, it has become clear that the distribution of Africans in the New World is no more random than the distribution of Europeans. Eighty percent of the slaves who went to southeast Brazil were taken from West Central Africa. Bahia traded in similar proportions with the Bight of Benin. Cuba represents the other extreme: no African region supplied more than 28 percent of the slave population in this region. Most American import regions fell between these examples, drawing on a mix of coastal regions that diversified as the trade from Africa grew to incorporate new peoples.

The Kingdom of Dahomey

European merchants and explorers brought many changes to West Africa. In some areas, the slave trade had the effect of breaking down societies. For instance, in the early nineteenth century the great Oyo Yoruba confederation of states began to break down due to civil wars. 
Conflicts escalated as participants sold slaves to acquire European weapons; these weapons were then used to acquire more slaves, thus creating a vicious cycle. Other groups grew and gained power because of their role in the slave trade, perhaps the most prominent being the West African kingdom of Dahomey. The Kingdom of Dahomey was established in the 1720s. Dahomey was built on the slave trade; kings used profits from the slave trade to acquire guns, which in turn were used to expand their kingdom by conquest and incorporation of smaller kingdoms. Most slaves were acquired either by trade with the interior or by raids to the north and west into Nigeria. Dahomey also took advantage of the civil wars among the Yoruba to gain access to a ready source of captives. European trade agents were kept isolated in the main trade port of Whydah. Only a privileged few were allowed into the interior of the kingdom to have an audience with the king; as a result, only a few contemporary sources describe the kingdom. Like his European counterparts, the king of Dahomey was an absolute monarch, possessing great power in a highly centralized state. All trade with Europeans was a royal monopoly, jealously guarded by the kings. The monarchs never allowed Europeans to deal directly with the people of the kingdom, keeping all profits for the state and allowing this highly militarized state to grow and expand.

Diverse Peoples of West Africa

To the northwest of the kingdom of Dahomey, a number of West African peoples were impacted by the slave trade. From the 14th through the 18th century, three smaller political states emerged in the forests along the coast of Africa, below the Songhai Empire. The uppermost groups of states were the Gonja or Volta Kingdoms, located around the Volta River and the confluence of the Niger on what was called the Windward Coast, now Sierra Leone and Liberia. 
Most of the people in the upper region of the Windward Coast belonged to a common language group, called Gur by linguists. They also held common religious beliefs and a common system of land ownership. They lived in decentralized societies where political power resided in associations of men and women. Below the Volta lay the Asante Empire, in the southeastern geographical area of the contemporary nations of Cote d'Ivoire and Togo, as well as modern Ghana. By the 15th century the Akan peoples, who included the Baule and Twi-speaking Asante, reached dominance in the central region. Akan culture had a highly evolved political system. One hundred years or more before the rise of democracy in North America, the Asante governed themselves through a constitution and assembly. Commercially, the Asante-dominated region straddled the African trade routes that carried ivory, gold, and grain. As a result, Europeans called various parts of the region the Ivory Coast, Grain Coast, and Gold Coast. The transatlantic slave trade was fed by the emergence of these Volta Kingdoms and the Asante Empire, which was a contemporary of the Dahomey Kingdom. During the 17th and early 18th centuries, African people taken from these regions were predominately among those enslaved in the British North American mainland colonies.

Yorubaland: Introduction

The Ibo people, found around the Bight of Biafra to the southeast of Yorubaland, predominated among those enslaved in the Chesapeake region of Virginia during the late 17th and early 18th century. Yorubaland is the cultural region of the Yoruba people in West Africa. It spans the modern-day countries of Nigeria, Togo, and Benin. Yorubaland lay along the West African coast along the Bights of Benin and Biafra, where the important slave trading station of Bonny was located. Its pre-modern history is based largely on oral traditions and legends. 
According to Yoruba religion, Olodumare—the Supreme God—ordered Obatala to create the earth, but on Obatala's way he found palm wine, which he drank and became intoxicated. Therefore, his younger brother Oduduwa took the three items of creation from him, climbed down from the heavens on a chain, and threw a handful of earth on the primordial ocean; he then put a baby rooster on it so that it would scatter the earth, thus creating the land on which Ile-Ife would be built. On account of his creation of the world, Oduduwa became the ancestor of the first divine king of the Yoruba, while Obatala is believed to have created the first Yoruba people out of clay. The meaning of the word "ife" in Yoruba is "expansion." "Ile-Ife" is therefore in reference to the myth of origin, "The Land of Expansion."

Ile-Ife

Evidence suggests that as of the 7th century BCE, the African peoples who lived in Yorubaland were not initially known as the Yoruba, though they shared a common ethnicity and language group. By the 8th century CE, Ile-Ife was already a powerful Yoruba kingdom, one of the earliest in Africa south of the Sahara-Sahel. Almost every Yoruba settlement traces its origin to princes of Ile-Ife. As such, Ife can be regarded as the cultural and spiritual homeland of the Yoruba nation. Archaeologically, the settlement at Ife can be dated to the 4th century BCE, with urban structures appearing in the 12th century CE. The Oòni (or king) of Ife today still claims direct descent from Oduduwa. Ile-Ife was a settlement of substantial size between the 12th and 14th centuries, with houses featuring potsherd pavements. The city is known worldwide for its ancient and naturalistic bronze—as well as stone and terracotta—sculptures, which reached their peak of artistic expression between 1200 and 1400. 
In the period around 1300, the artists at Ile-Ife developed a refined and naturalistic sculptural tradition in terracotta, stone, and copper alloy—copper, brass, and bronze—many of which appear to have been created under the patronage of King Obalufon II—the man who today is identified as the Yoruba patron deity of brass casting, weaving, and regalia. After this period, production declined as political and economic power shifted to the nearby kingdom of Benin, which, like the Yoruba kingdom of Oyo, developed into a major empire.

The Rise of the Oyo Empire

The mythical origins of the Oyo Empire lie with Oranyan (also known as Oranmiyan), the second prince of Ile-Ife, who made Oyo his new kingdom and became the first oba with the title of Alaafin of Oyo (Alaafin means "owner of the palace" in Yoruba). The oral tradition holds that he left all his treasures in Ile-Ife and allowed another king, named Adimu, to rule there. Oranyan was succeeded by Oba Ajaka, but he was deposed because he allowed his sub-chiefs too much independence. Leadership was then conferred upon Ajaka's brother, Shango, who was later deified as the deity of thunder and lightning. Ajaka was restored after Shango's death. His successor, Kori, managed to conquer the rest of what later historians would refer to as metropolitan Oyo. The heart of metropolitan Oyo was its capital at Oyo-Ile. Oyo had grown into a formidable inland power by the end of the 14th century, but it suffered military defeats at the hands of the Nupe led by Tsoede. Sometime around 1535, the Nupe occupied Oyo and forced its ruling dynasty to take refuge in the kingdom of Borgu. The Yoruba of Oyo went through an interregnum of eighty years as an exiled dynasty. However, they re-established Oyo to be more centralized and expansive than ever. During the 17th century, Oyo began a long stretch of growth, becoming a major empire. It never encompassed all Yoruba-speaking people, but it was the most populous kingdom in Yoruba history. 
The Oyo Empire rose through the outstanding organizational skills of the Yoruba, gaining wealth from trade and its powerful cavalry. It was the most politically important state in the region from the mid-17th century to the late 18th century, holding sway not only over most of the other kingdoms in Yorubaland but also over nearby African states, notably the Fon Kingdom of Dahomey in the modern Republic of Benin to the west.

The Power of Oyo

The key to the Yoruba rebuilding of Oyo was a stronger military and a more centralized government. Oba Ofinran succeeded in regaining Oyo's original territory from the Nupe. A new capital, Oyo-Igboho, was constructed, and the original became known as Old Oyo. The next oba, Eguguojo, conquered nearly all of Yorubaland. Despite a failed attempt to seize the Benin Empire sometime between 1578 and 1608, Oyo continued to expand. The Yoruba allowed autonomy to the southeast of metropolitan Oyo, where the non-Yoruba areas could act as a buffer between Oyo and Imperial Benin. By the end of the 16th century, the Ewe and Aja states of modern Benin were paying tribute to Oyo. The reinvigorated Oyo Empire began raiding southward as early as 1682, and by the end of its military expansion its borders would reach the coast some 200 miles southwest of its capital. Initially, the population was concentrated in metropolitan Oyo. With imperial expansion, Oyo reorganized to better manage its vast holdings within and outside Yorubaland; it was divided into four layers defined by their relation to the core of the empire: Metropolitan Oyo, southern Yorubaland, the Egbado Corridor, and Ajaland. The Oyo Empire developed a highly sophisticated political structure to govern its territorial domains. Scholars have not determined how much of this structure existed prior to the Nupe invasion, though some of Oyo's institutions are clearly derived from early accomplishments in Ife. The Oyo Empire was neither a hereditary monarchy nor an absolute one.
While the Alaafin of Oyo was supreme overlord of the people, he was not without checks on his power. The Oyo Mesi (the seven councilors of state) and the Yoruba Earth cult known as Ogboni kept the Oba's power in check: the Oyo Mesi spoke for the politicians, while the Ogboni spoke for the people, backed by the power of religion. The power of the Alaafin in relation to the Oyo Mesi and Ogboni depended on his personal character and political shrewdness. Oyo became the southern emporium of the trans-Saharan trade, with exchanges made in salt, leather, horses, kola nuts, ivory, cloth, and slaves. The Yoruba of metropolitan Oyo were also highly skilled in craft making and iron work. Aside from taxes on trade products coming in and out of the empire, Oyo also grew wealthy from the taxes imposed on its tributaries. Oyo's imperial success made Yoruba a lingua franca almost to the shores of the Volta. Toward the end of the 18th century, the empire acted as a go-between for both the trans-Saharan and trans-Atlantic slave trades. By 1680, the Oyo Empire spanned over 150,000 square kilometers.

Decline

In the second half of the 18th century, dynastic intrigues, palace coups, and failed military campaigns began to weaken the Oyo Empire. Recurrent power struggles and the resulting periods without a reigning king created a vacuum in which the power of regional commanders rose. As Oyo tore itself apart through political intrigue, its vassals began taking advantage of the situation to press for independence. Some of them succeeded, and Oyo never regained its prominence in the region. It became a protectorate of Great Britain in 1888 before further fragmenting into warring factions, and it ceased to exist as any sort of power in 1896.

Sokoto Caliphate

North of the Oyo state, the Sokoto Caliphate arose as a sovereign Sunni Muslim caliphate in West Africa, founded during the jihad of the Fulani War in 1804 by Usman dan Fodio.
It was dissolved when the British conquered the area in 1903 and annexed it into the newly established Northern Nigeria Protectorate. Developed in the context of multiple independent Hausa kingdoms, at its height the caliphate linked over 30 different emirates and over 10 million people in the most powerful state in the region and one of the most significant empires in Africa in the nineteenth century. The caliphate, which brought decades of economic growth throughout the region, was a loose confederation of emirates that recognized the Amir al-Mu'minin, the Sultan of Sokoto, as their overlord. An estimated 1 million to 2.5 million non-Muslim slaves were captured during the Fulani War. Slaves provided labor for the plantations and were given an opportunity to become Muslims.

Rise of the Sokoto Caliphate

The major power in the region in the 17th and 18th centuries had been the Bornu Empire. However, revolutions and the rise of new forces decreased its power, and by 1759 its rulers had lost control over the oasis town of Bilma and access to the trans-Saharan trade. Vassal cities of the empire gradually became autonomous, and the result by 1780 was a political array of independent states in the region. The fall of the Songhai Empire in 1591 to Morocco had freed much of central Africa, and a number of Hausa sultanates led by different Hausa aristocracies had grown to fill the void. Three of the most significant were the sultanates of Gobir, Kebbi (both in the Rima River valley), and Zamfara, all in present-day Nigeria. These kingdoms engaged in regular warfare against each other, especially in conducting slave raids, and to pay for the constant warfare they imposed high taxation on their citizens. The region between the Niger River and Lake Chad was largely populated by the Hausa, the Fulani, and other ethnic groups that had immigrated to the area, such as the Tuareg.
Much of the Hausa population had settled in the cities throughout the region and became urbanized. The Fulani, in contrast, had largely remained a pastoral community, herding cattle, goats, and sheep; they populated grasslands between the towns throughout the region. With increasing trade, a good number of Fulani settled in towns, forming a distinct minority. Much of the population had converted to Islam in the centuries before; however, local pagan beliefs persisted in many areas, especially in the aristocracy. At the end of the 1700s, an increase in Islamic preaching occurred throughout the Hausa kingdoms. A number of the preachers were linked in a shared school of Islamic study. Scholars were invited or traveled to the Hausa lands from Muslim North Africa and joined the courts of some sultanates, such as in Kano. These scholars preached a return to adherence to Islamic tradition. Usman dan Fodio, an Islamic scholar and an urbanized Fulani, had been actively educating and preaching in the city of Gobir with the approval and support of the Hausa leadership of the city. However, when Yunfa, a former student of dan Fodio, became the sultan of Gobir, he restricted dan Fodio's activities, eventually forcing him into exile in Gudu. A large number of people left Gobir to join dan Fodio, who also began to gather new supporters from other regions. Feeling threatened by his former teacher, Yunfa declared war on dan Fodio on February 21, 1804. Usman dan Fodio was elected "Commander of the Faithful" (Amir al-Mu'minin) by his followers, marking the beginning of the Sokoto state. Usman dan Fodio then created a number of flag bearers amongst those following him, creating an early political structure of the empire. Declaring a jihad against the Hausa kings, dan Fodio rallied his primarily Fulani “warrior-scholars” against Gobir. Despite early losses at the Battle of Tsuntua and elsewhere, the forces of dan Fodio began taking over some key cities starting in 1805. 
The Fulani used guerrilla warfare to turn the conflict in their favor and gathered support from the civilian population, which had come to resent the despotic rule and high taxes of the Hausa kings. Even some non-Muslim Fulani started to support dan Fodio. The war lasted from 1804 until 1808 and resulted in thousands of deaths. The forces of dan Fodio were able to capture the states of Katsina and Daura, the important kingdom of Kano in 1807, and finally Gobir in 1809. In the same year, Muhammed Bello, the son of dan Fodio, founded the city of Sokoto, which became the capital of the Sokoto state. The jihad had created a new slaving frontier on the basis of rejuvenated Islam. By 1900, the Sokoto state had at least 1 million and perhaps as many as 2.5 million slaves, second in size among all modern slave societies only to the United States (which had 4 million in 1860). However, there was far less of a distinction between slaves and their masters in the Sokoto state.

Expansion of the Sokoto State

From 1808 until the mid-1830s, the Sokoto state expanded, gradually annexing the plains to the west and key parts of Yorubaland. It became one of the largest states in Africa, stretching from modern-day Burkina Faso to Cameroon and including most of northern Nigeria and southern Niger. At its height, the Sokoto state included over 30 different emirates under its political structure. The state was organized with the sultan of Sokoto ruling from the city of Sokoto (and, for a brief period under Muhammad Bello, from Wurno). The leader of each emirate was appointed by the sultan as the flag bearer for that city but was given wide independence and autonomy. Much of the growth of the state occurred through the establishment of an extensive system of ribats as part of the consolidation policy of Muhammed Bello, the second sultan. The ribats founded a number of new cities with walled fortresses, schools, markets, and other buildings.
These proved crucial in expansion: they developed new cities, settled the pastoral Fulani people, and supported the growth of plantations, which were vital to the economy. By 1837, the Sokoto state had a population of around 10 million people.

Administrative Structure

The Sokoto state was largely organized around a number of mostly independent emirates pledging allegiance to the sultan of Sokoto. The administration was initially built to follow the teachings of the prophet Muhammad as well as the theories of Al-Mawardi found in "The Ordinances of Government." The Hausa kingdoms prior to Usman dan Fodio had been run largely through hereditary succession. The early rulers of Sokoto, dan Fodio and Bello, abolished systems of hereditary succession, preferring that leaders be appointed by virtue of their Islamic scholarship and moral standing. Emirs were appointed by the sultan; they traveled yearly to pledge allegiance and deliver taxes in the form of crops, cowry shells, and slaves. When a sultan died or retired from the office, an appointment council made up of the emirs would select a replacement. Direct lines of succession were largely not followed, although each sultan claimed direct descent from dan Fodio. Major administrative authority in the empire was divided between Sokoto and the Gwandu Emirate. In 1815, Usman dan Fodio retired from the administrative business of the state and divided the area taken over during the Fulani War, appointing his brother Abdullahi dan Fodio to rule in the west in the Gwandu Emirate and his son Muhammed Bello to govern the Sokoto Sultanate. The emir at Gwandu retained allegiance to the Sokoto Sultanate and spiritual guidance from the sultan, but he managed the separate emirates under his supervision independently of the sultan. This administrative structure of loose allegiances of the emirates to the sultan did not always function smoothly.
There was a series of revolts by the Hausa aristocracy in 1816–1817, during the reign of Muhammed Bello, but the sultan ended these by granting the leaders titles to land. Multiple crises arose during the 19th century between the Sokoto Sultanate and many of the subservient emirates, notably the Adamawa Emirate and the Kano Emirate. A serious revolt occurred in 1836 in the city-state of Gobir, which was crushed by Muhammed Bello at the Battle of Gawakuke. The Sufi community throughout the region proved crucial in the administration of the state. The Tariqa brotherhoods, most notably the Qadiriyya, to which every successive sultan of Sokoto was an adherent, provided a group linking the distinct emirates to the authority of the sultan. Scholars claim that this Islamic scholarship community provided an "embryonic bureaucracy" that linked the cities throughout the Sokoto state.

Economy

After the establishment of the caliphate, there were decades of economic growth throughout the region, particularly after the wave of revolts in 1816–1817. The Sokoto Caliphate established significant trade over the trans-Saharan routes. After the Fulani War, all land in the empire was declared waqf—owned by the entire community. However, the sultan allocated land to individuals or families, as could an emir; such land could be inherited by family members but could not be sold. Exchange was based largely on slaves, cowries, or gold. Major crops produced included cotton, indigo, kola and shea nuts, grain, rice, tobacco, and onions. Slavery remained a large part of the economy, although its operation changed with the end of the Atlantic slave trade in the early 19th century. Slaves were gained through raiding and via markets, as they had been earlier in West Africa. The founder of the caliphate allowed slavery only for non-Muslims; it was viewed as a means of bringing non-Muslims into the Muslim community.
However, the expansion of agricultural plantations under the caliphate depended on slave labor, and around half of the caliphate's population was enslaved in the 19th century. The plantations were established around the ribats, and large areas of agricultural production took place around the cities of the empire. The institution of slavery was mediated by the lack of a racial barrier among the peoples and by a complex and varying set of relations between owners and slaves, which included the slaves' right to accumulate property by working on their own plots, the possibility of manumission, and the potential for slaves to convert and become members of the Islamic community. There are historical records of slaves reaching high levels of government and administration in the Sokoto Caliphate. Its commercial prosperity was also based on Islamic traditions, market integration, internal peace, and an extensive export-trade network.

Kingdom of Kongo

The Kingdom of Kongo is significant in exploring the historic contexts of African American heritage because the majority of all Africans enslaved in the southern English colonies were from West Central Africa. The history and culture of West Central African peoples are thus important to understanding African American people in the present because of their high representation among enslaved peoples. It has been estimated that 69% of all African people transported in the transatlantic slave trade between 1517 and 1700 CE were from West Central Africa, and that between 1701 and 1800 people from West Central Africa comprised about 38% of all Africans brought to the West to be enslaved. In South Carolina by 1730, the number of Africans or "salt-water negroes," mostly from West Central Africa, and "native-born" African Americans, many descended from West Central Africans, exceeded the white population. However, slave traders transported the majority of enslaved Africans from this region to Brazil.
To the south of the Bights of Biafra and Benin in West Central Africa, the Portuguese under the leadership of Paulo Dias de Novais established a protectorate over the Kingdom of Kongo and founded a colony at Luanda in 1575, in the modern nation of Angola. The city of Luanda became one of the main ports for the export of enslaved Africans across the Atlantic. The Kongo was another kingdom that developed in West Central Africa in the century before Portuguese exploration of West Africa. In the three hundred years from the date Ne Lukeni Kia Nzinga founded the kingdom until the Portuguese destroyed it in 1665, Kongo was an organized, stable, and politically centralized society based on a subsistence economy. The Bakongo (the Kongo people), today several million strong, live in the modern Democratic Republic of the Congo, Congo-Brazzaville, neighboring Cabinda, and Angola. The present division of their territory into modern political entities masks the fact that the area was once united under the suzerainty of the ancient Kingdom of Kongo, one of the most important civilizations ever to emerge in Africa. The kings of the Kongo ruled over an area stretching from the Kwilu-Nyari River, just north of the port of Loango, to the river Loje in northern Angola, and from the Atlantic to the inland valley of the Kwango. The Kongo encompassed an area roughly equal, in coastal distance, to that between New York City and Richmond, Virginia, and, in inland breadth, to that between Baltimore and Erie, Pennsylvania. By 1600, after a century of overseas contact with the Portuguese, the complex Kongo kingdom dominated a region more than half the size of England, stretching from the Atlantic to the Kwango. The Bakongo shared a common culture with the people of eight adjoining regions, all of whom were either part of the Kongo Kingdom during the transatlantic slave trade or were part of the kingdoms formed by peoples fleeing from the advancing armies of Kongo chiefdoms.
In their records, slave traders called the Bakongo, as well as the people from the adjoining regions, "Congos" and "Angolas," although they may have been Mbembe, Mbanda, Nsundi, Mpangu, Mbata, Mbamba, or Loango. Ki-Kongo-speaking groups inhabited the West Central African region then known as the Loango Coast, the term used to describe a historically significant area of West Central Africa extending from Cape Lopez or Cape Catherine in Gabon to Luanda in Angola. Within this region, Loango has been the name of a kingdom, a province, and a port. Once linked to the powerful Kongo Kingdom, the Loango Kingdom was dominated by the Vili, a Kongo people who migrated to the coastal region during the 1300s. Loango became an independent state probably in the late 1300s or early 1400s. Along with two other Kongo-related kingdoms, Kakongo and Ngoyo (present-day Cabinda), it became one of the most important trading states north of the Congo River. A common social structure was shared by people in the coastal kingdoms of Loango, Kakongo, Ngoyo, Vungu, and the Yombe chiefdoms, as well as the Teke federation in the east and the Nsundi societies on either side of the Zaire River from the Matadi/Vungu area in the west to Mapumbu of Malebo Pool in the east. The provincial regions, districts, and villages each had chiefs and a hierarchical system through which tribute flowed upward to the King of the Kongo and rewards flowed downward. Each regional clan or group had a profession or craft, such as weaving, basket making, potting, or iron working. Tribute and trade consisted of natural resources, agricultural products, textiles, other material cultural artifacts, and cowrie shells. The "Kongos" and "Angolas" shared a lingua franca, or trade language, that allowed them to communicate. They also shared other cultural characteristics, such as matrilineal social organization and a cosmology or worldview expressed in their religious beliefs and practices.
Woman-and-child figures are visual metaphors for both individual and societal fertility among the Kongo peoples; these images reflect their matrilineal social organization, the tracing of kinship through the mother's side of the family. The mother and child was a common theme, representing a woman who has saved her family line from extinction. Matrilineal social organization and certain cosmological beliefs expressed in religious ceremonies and funerary practices continue to be evident in the culture of rural South Carolina and Florida African Americans, who are descendants of enslaved Africans. Before the 1920s, male and female figures carved in stone served as Kongo funerary monuments commemorating the accomplishments of the deceased. Kongo mortuary figures are noted for their seated postures, expressive gestures, and details of jewelry and headwear that indicate the deceased's status; the leopard-claw hat, for example, was worn by male rulers and by women acting as regents. The European slave trade led to internal wars, the enslavement of multitudes, major political upheavals, migrations, and shifts from more to less centralized authority in Kongo and other African societies. Most notably, the slave trade destroyed the old lineages and kinship ties upon which social order and organization in African societies were based.

Christianity in the Kingdom of Kongo

The conversion of Kongo to Christianity was one of the more remarkable accomplishments of the early modern Catholic church. Within a few years of contact with the Portuguese, following a brief exchange of people, King Nzinga a Nkuwu of Kongo was baptized in 1491 as João I. His son Afonso (1509–1542) then established the church in the kingdom and created an educational network that trained the local nobility in Christian religious concepts, financing its operations and keeping it firmly under his control.
During his reign, locally educated Kongolese elites carried the faith to every corner of his domains, so that when he died in 1542 it could rightfully be said that Kongo was a Christian country. Missionaries from Portugal played a remarkably small role in the propagation of Christianity; they were valued primarily for their capacity to administer the sacraments, which could only be done by ordained priests. Afonso expected even this dependency upon a foreign clergy to end, and Rome cooperated with him by elevating his son, Henrique, to the status of bishop. However, this did not produce a long-lasting tradition of local ordination. In 1534, the Portuguese crown claimed the right to appoint bishops for Kongo, and it subsequently kept the number of priests low while failing to promote significant numbers of Kongolese to holy orders. Thus, for most of its history as a Christian kingdom, Kongo was in the interesting position of hosting foreign priests primarily to administer the sacraments while keeping lay people in charge of Christian education throughout the realm. The Jesuits became involved in Kongo shortly after Afonso's death, with a mission that began in 1548. Afonso's successor Diogo I (1545–61) sent a Kongolese man of whole or partial Portuguese descent named Diogo Gomes, educated in Kongo's school system, as an ambassador to Portugal to request missionaries. Gomes contacted the Jesuits, accompanied them to Kongo, and then joined the order himself, taking the name Cornélio Gomes. He was probably responsible for the linguistic content of the first Kikongo catechism (a summary of church teachings), published in 1556 but no longer extant. The catechism likely included the linguistic equations between Christianity and local religion that would characterize Kongo's own interpretation of the faith. The mission ran into political difficulties with Diogo over matters of precedence and some local customs, and it lasted only a few years.
When they came with the colonial mission of Paulo Dias de Novais in 1575, the Jesuits played a key role in the evangelization of the Portuguese colony of Angola and its surrounding Kimbundu-speaking neighbors. Their experience is an example of evangelization in a colonial setting in Africa, and it contrasts with Jesuit approaches to conversion in the neighboring and independent Kingdom of Kongo. They drew heavily on previous experiences in the Kingdom of Kongo, which had itself become Christian a century earlier and pioneered a marriage between African religion and Christian spirituality. When the Jesuits came to Kongo in 1548, they found an established church already in place and added relatively little to it before they left following political disputes. When Dias de Novais came to found Angola, he was initially dependent on Kongo's military assistance, and the Jesuits, too, were dependent on the Kongolese version of Christianity, as is clear in their choice of vocabulary in the Kimbundu catechism that they sponsored and oversaw in 1628. However, the colonial situation in Angola made the Jesuits more willing to accept the idea of conversion by the sword, and they were notably less tolerant of African religious inclusions in Angola than in Kongo. It was in Kongo's southern neighbor of Ndongo that the Jesuits would come both as missionaries to non-Christians and as part of the Portuguese conquest, but their engagement was always tempered by contact with Kongo. Engagement with Ndongo began in 1560, when Portugal dispatched Paulo Dias de Novais and four Jesuits to the kingdom in response to King Ngola Kiluanje's request for missionaries. The mission did not make much progress, and Dias de Novais soon returned to Portugal, leaving only the Jesuit Francisco de Gouveia to labor on in Ndongo, where he made some converts and established a small community of Christians. While Gouveia enjoyed considerable influence, he never managed to convert the king.
When Dias de Novais returned to Angola in 1575, it was with an army, more Jesuits, and a charter to subjugate and conquer the Kingdom of Angola. Kongo would play an important role in the initial conquest of Angola, for Dias de Novais's mission had begun largely because Kongo's king Álvaro I (1568–87) agreed to allow Portugal to use his territory at Luanda as a base, in compensation for the help Portugal had given him in quelling an uprising by a mysterious group of people called "Jagas." In addition to relying on Kongo for a base, Dias de Novais offered his services as a mercenary to Kiluanje kia Ndambi, King of Ndongo, and assisted him in putting down rebellions of his own. But in 1579, upon hearing of Dias de Novais's charter and commission to conquer Angola, Ndongo's king expelled the Portuguese from his lands. In the aftermath of this disaster, the Portuguese fell back on their alliance with Kongo, but Kongo then retracted its official support of the Portuguese colony following the defeat of the Kongolese army by Ndongo in 1580. However, Kongo continued to play an important role for some time both in Portuguese politics in Angola and in the way in which Christianity developed there. Even without official support from the king, many Kongolese noblemen privately helped the Portuguese. According to one report of 1588, some 4,000 Kongolese were serving in the Portuguese forces, and Andrew Battell, a captive Englishman serving in Portuguese forces around 1600, noted that it was a regular practice for a Kongolese nobleman to come with a troop of soldiers to serve as an organizer for a new Christian community and as an intermediary between the surrendered Mbundu lord and the Portuguese assigned to collect tribute from him. In addition to receiving assistance from these noble allies, Dias de Novais also built support by taking in disgruntled local rulers from the fringe areas of Ndongo's control along the Kwanza and Bengo Rivers.
The Jesuits successfully converted some of these local rulers to Christianity. Conversion was considered a step toward their becoming Portuguese vassals, a change of status that Portugal required of both the allied and the conquered in the region. Baltazar Barreira, one of the leading Jesuits of the mission, described the baptism in 1581 of the first of these allied nobles, named Songa, as occurring in a large ceremony conducted by the Jesuits with great pomp, which included a number of Kongolese participants. In addition to an installation ceremony, there was also a ritualistic burning of country "idols." These ceremonious conversions, made quickly and with considerable political expediency, characterized the advance of Portuguese rule in Angola. If military aid from Kongo was important, the country played an even more substantial role in the Christian evangelization of Angola. The dependency of the Angola mission on Kongo was symbolically marked in 1596, when Rome elevated Kongo's capital of São Salvador to the seat of the bishop of Kongo and Angola, placing the nascent Angolan church under the nominal control of Kongo, where the bishop's cathedral was located. Kongo's ascendancy in religious matters was more symbolic than real, however, and Portugal claimed the right to appoint the bishops of this new see. This joint alliance of the Jesuits with both the Kongo church and Kongo's military was sharply challenged in the early seventeenth century. Thanks to some key alliances, Portugal was able to recover the military initiative and made major conquests in Ndongo, driving the king from the capital and forcing him to come to terms. But in the process the Portuguese also made incursions into Kongo, and in 1622 they launched a major, but unsuccessful, invasion. From that point onward, Kongo became the sworn enemy of Portugal, and formal ecclesiastical relations were strained to the breaking point.
The new estrangement between Kongo and Angola meant that the Angolan church could not benefit from Kongo's long-established network of schools and schoolmasters, who led theological instruction in every corner of that country, just at the time when the Portuguese were conquering territory far from the area around Kongo's coastal land of Luanda. The Jesuits, along with Portuguese secular priests, were thus responsible for building such a network themselves in Angola, and they never won the sort of general adherence to Christianity found in Kongo. Despite the substantial differences in their political situations, missionaries both in Kongo and in the Kimbundu-speaking areas of Angola developed Christian theologies that incorporated large components of indigenous spirituality. These two syncretic systems could then potentially be translated into other African religious systems and carried across the Atlantic, where so many Central Africans served as catechists. The role of Kongolese clergy and lay catechists in developing a syncretic form of Christianity in conquered Angola may just as well have served the same purpose in the slave communities of the Americas. The conversion of Angolans was reflected in the religion of slaves in Brazil. The same wave of Portuguese conquest and colonization that had led to the formation of Kimbundu Christianity also brought thousands of slaves to Brazil; there, in the most successful of the sixteenth-century captaincies, Bahia and Pernambuco, the Jesuits took the lead in converting not only the indigenous Brazilians but also the African slaves who came among them. In this they employed the language of their early catechisms; Jesuits in Pernambuco, for example, studied Kimbundu and learned to read and even compose in the language, as sixteenth-century sources reveal. Both Christian and traditional religious ideas and practices crossed the Atlantic with these slaves.
Queen Ana Nzinga

African peoples fiercely resisted Portuguese expansion under the leadership of Queen Ana Nzinga. In 1624, she inherited the throne of Ndongo, just to the east of the Portuguese colony of Luanda. To put a stop to slave raids in her kingdom and to attacks on it by rival African states, she agreed to an alliance with the Portuguese and was baptized as a Christian in 1626. When the slave raids continued and the Portuguese reneged on this alliance, Ana Nzinga and her supporters migrated into the African interior, away from Portuguese control, and formed the new kingdom of Matamba. The new kingdom waged war on the Portuguese and became a haven for runaway slaves. In 1641, the queen even formed an alliance with the Dutch, whose forces conquered and briefly occupied Luanda before the Portuguese were able to recover and retake the colony. Ironically, by the time of her death in 1663, the queen had been rebaptized as a Christian, while her kingdom, Matamba, had become a major supplier of slaves to the Portuguese by securing slaves from outlying regions. Ana Nzinga's life story illustrates that indigenous Africans were not simply passive agents in the African slave trade.

Kingdoms of Madagascar

On the island of Madagascar off the east coast of Africa, a number of states emerged that were heavily involved in the slave trade. Among the many fragmented communities that populated Madagascar, the Sakalava, Merina, and Betsimisaraka seized the opportunity to unite disparate groups and establish powerful kingdoms under their rule.

Diverse Populations and the Rise of Great Kingdoms

Over the past 2,000 years, Madagascar has received waves of settlers of diverse origins, including Austronesian, Bantu, Arab, South Asian, Chinese, and European populations. Centuries of intermarriage created the Malagasy people, who primarily speak Malagasy, an Austronesian language with Bantu, Malay, Arabic, French, and English influences.
Most of the genetic makeup of the average Malagasy, however, reflects an almost equal blend of Austronesian and Bantu influences, especially in coastal regions. Other populations often intermixed with the existing population to a more limited degree or sought to preserve a community separate from the Malagasy majority. By the European Middle Ages (c. 1200 CE), over a dozen predominant ethnic identities had emerged on the island, typified by rule under a local chieftain. Leaders of some communities, such as the Sakalava, Merina, and Betsimisaraka, seized the opportunity to unite these disparate groups and establish powerful kingdoms under their rule. The kingdoms increased their wealth and power through exchanges with European, Arab, and other seafaring traders, whether legitimate vessels or pirates.

Sakalava

Madagascar's western clan chiefs began to extend their power through trade with their Indian Ocean neighbors: first with Arab, Persian, and Somali traders who connected Madagascar with East Africa, the Middle East, and India; later with European slave traders. The wealth created in Madagascar through trade produced a state system ruled by powerful regional monarchs known as the Maroserana. These monarchs adopted the cultural traditions of subjects in their territories and thereby expanded their kingdoms. They took on divine status, and new nobility and artisan classes were created. Madagascar functioned as a contact port for the Swahili seaport city-states, such as Sofala, Kilwa, Mombasa, and Zanzibar. By c. 1200 CE, large chiefdoms began to dominate considerable areas of the island. Among these were the Betsimisaraka alliance of the eastern coast and the Sakalava chiefdoms of the Menabe (centered in what is now the town of Morondava) and of the Boina (centered in what is now the provincial capital of Mahajanga). The influence of the Sakalava extended across the area that is now the provinces of Antsiranana, Mahajanga, and Toliara.
According to local tradition, the founders of the Sakalava kingdom were Maroseraña—or Maroseranana, "those who owned many ports"—princes from the Fiherenana (now Toliara). They quickly subdued the neighboring princes, starting with the southern ones in the Mahafaly area. The true founder of Sakalava dominance was Andriamisara. His son Andriandahifotsy (c. 1610 – 1658) extended his authority northwards, past the Mangoky River. His two sons, Andriamanetiarivo and Andriamandisoarivo, extended the kingdom further, up to the Tsongay region (now Mahajanga). At about that time, the empire started to split, resulting in a southern kingdom (Menabe) and a northern kingdom (Boina). Further splits followed, despite the continued extension of the Boina princes' reach into the extreme north, in Antankarana country.

Betsimisaraka

Like the Sakalava to the west, today's Betsimisaraka are composed of numerous ethnic sub-groups that formed a confederation in the early 18th century. Through the late 17th century, the various clans of the eastern seaboard were governed by chieftains who typically ruled over one or two villages. Around 1700, the Tsikoa clans began uniting around a series of powerful leaders. Ramanano, the chief of Vatomandry, was elected in 1710 as the leader of the Tsikoa—"those who are steadfast"—and initiated invasions of the northern ports. A northern Betsimisaraka zana-malata (a person of mixed native and European origin) named Ratsimilaho led a resistance to these invasions and successfully united his compatriots around this cause. In 1712, he forced the Tsikoa to flee and was elected king of all the Betsimisaraka; at his capital at Foulpointe, he was given a new name: Ramaromanompo—"Lord Served by Many." He established alliances with the southern Betsimisaraka and the neighboring Bezanozano, extending his authority over these areas by allowing local chiefs to maintain their power in exchange for tributes of rice, cattle, and slaves.
By 1730, he was one of the most powerful kings of Madagascar. By the time of his death in 1754, his moderate and stabilizing rule had provided nearly forty years of unity among the diverse clans within the Betsimisaraka political union. He also allied the Betsimisaraka with the other most powerful kingdom of the time, the Sakalava of the west coast, through his marriage to Matave, the only daughter of the Iboina king Andrianbaba. Ratsimilaho's successors gradually weakened the union, leaving it vulnerable to the growing influence and presence of European, and particularly French, settlers, slave traders, missionaries, and merchants. The fractured Betsimisaraka kingdom was easily colonized in 1817 by Radama I, king of Merina. The subjugation of the Betsimisaraka in the 19th century left the population relatively impoverished.

Merina

The Merina emerged as the politically dominant group over the course of the 17th and 18th centuries. Oral history traces the emergence of a united kingdom in the central highlands of Madagascar—a region called Imerina—back to the early 16th-century king Andriamanelo. By 1824, sovereigns in his line had conquered nearly all of Madagascar, particularly through the military strategy and ambitious political policies of Andrianampoinimerina (c. 1785 – 1810) and his son Radama I (1792 – 1828). The kingdom's contact with British and later French powers led local leaders to build schools and a modern army based on European models. Merina oral histories mention several attacks by Sakalava raiders against their villages as early as the 17th century and throughout the 18th century. However, it seems that the term was used generically to designate all the nomadic peoples in the sparsely settled territories between the Merina country and the western coast of the island. The Merina king Radama I's wars with the western coast of the island ended in a fragile peace sealed through his marriage to the daughter of a king of Menabe.
Though the Merina never annexed the two last Sakalava strongholds of Menabe and Boina (Mahajanga), the Sakalava never again posed a threat to the central plateau, which remained under Merina control until the French colonization of the island in 1896. The Merina kingdom reached the peak of its power in the early 19th century. In a number of military expeditions, large numbers of non-Merina were captured and used for slave labor. By the 1850s, these slaves had been replaced by imported slaves from East Africa, mostly of Makoa ethnicity. Until the 1820s, imported slave labor benefited all classes of Merina society, but in the period from 1825 to 1861 a general impoverishment of small farmers led to the concentration of slave ownership in the hands of the ruling elite. The slave-based economy carried a constant danger of slave revolt, and for a period in the 1820s all non-Merina males captured in military expeditions were killed rather than enslaved, for fear of an armed uprising. There was a brief period of increased prosperity in the late 1870s, as slave imports began to pick up again, but it was cut short by the abolition of slavery under French administration in 1896. Due to the influence of British missionaries, the Merina upper classes converted entirely to Protestantism in the mid-19th century, following the example of their queen, Ranavalona II.

Primary Sources

The African Slave Trade

The three following primary sources offer insights into the experiences of enslaved African peoples while the transatlantic slave trade was in operation.

John Barbot

John Barbot, an agent for the French Royal African Company, made at least two voyages to the West Coast of Africa, in 1678 and 1682.
"PREPOSSESSED OF THE OPINION...THAT EUROPEANS ARE FOND OF THEIR FLESH"

By John Barbot

Those sold by the Blacks are for the most part prisoners of war, taken either in fight, or pursuit, or in the incursions they make into their enemies territories; others stolen away by their own countrymen; and some there are, who will sell their own children, kindred, or neighbours. This has been often seen, and to compass it, they desire the person they intend to sell, to help them in carrying something to the factory by way of trade, and when there, the person so deluded, not understanding the language, is sold and deliver'd up as a slave, notwithstanding all his resistance, and exclaiming against the treachery....

The kings are so absolute, that upon any slight pretense of offences committed by their subjects, they order them to be sold for slaves, without regard to rank, or possession....

Abundance of little Blacks of both sexes are also stolen away by their neighbours, when found abroad on the roads, or in the woods; or else in the Cougans, or corn-fields, at the time of the year, when their parents keep them there all day, to scare away the devouring small birds, that come to feed on the millet, in swarms, as has been said above.

In times of dearth and famine, abundance of those people will sell themselves, for a maintenance, and to prevent starving. When I first arriv'd at Goerree, in December, 1681, I could have bought a great number, at very easy rates, if I could have found provisions to subsist them; so great was the dearth then, in that part of Nigritia.

To conclude, some slaves are also brought to these Blacks, from very remote inland countries, by way of trade, and sold for things of very inconsiderable value; but these slaves are generally poor and weak, by reason of the barbarous usage they have had in traveling so far, being continually beaten, and almost famish'd; so inhuman are the Blacks to one another....
The trade of slaves is in a more peculiar manner the business of kings, rich men, and prime merchants, exclusive of the inferior sort of Blacks.

These slaves are severely and barbarously treated by their masters, who subsist them poorly, and beat them inhumanly, as may be seen by the scabs and wounds on the bodies of many of them when sold to us. They scarce allow them the least rag to cover their nakedness, which they also take off from them when sold to Europeans; and they always go bare-headed.

The wives and children of slaves, are also slaves to the master under whom they are married; and when dead, they never bury them, but cast out the bodies into some by-place, to be devoured by birds, or beasts of prey.

This barbarous usage of those unfortunate wretches, makes it appear, that the fate of such as are bought and transported from the coast to America, or other parts of the world, by Europeans, is less deplorable, than that of those who end their days in their native country; for aboard ships all possible care is taken to preserve and subsist them for the interest of the owners, and when sold in America, the same motive ought to prevail with their masters to use them well, that they may live the longer, and do them more service. Not to mention the inestimable advantage they may reap, of becoming christians, and saving their souls, if they make a true use of their condition....

Many of those slaves we transport from Guinea to America are prepossessed with the opinion, that they are carried like sheep to the slaughter, and that the Europeans are fond of their flesh; which notion so far prevails with some, as to make them fall into a deep melancholy and despair, and to refuse all sustenance, tho' never so much compelled and even beaten to oblige them to take some nourishment: notwithstanding all which, they will starve to death; whereof I have had several instances in my own slaves both aboard and at Guadalupe.
And tho' I must say I am naturally compassionate, yet have I been necessitated sometimes to cause the teeth of those wretches to be broken, because they would not open their mouths, or be prevailed upon by any entreaties to feed themselves; and thus have forced some sustenance into their throats....

As the slaves come down to Fida from the inland country, they are put into a booth, or prison, built for that purpose, near the beach, all of them together; and when the Europeans are to receive them, they are thoroughly examined, every part of every one of them, to the smallest member, men and women being all stark naked. Such as are allowed good and sound, are set on one side, and the others by themselves; which slaves so rejected are there called Mackrons, being above thirty five years of age, or defective in their limbs, eyes or teeth; or grown grey, or that have the venereal disease, or any other imperfection. These being set aside, each of the others, which have passed as good, is marked on the breast, with a red-hot iron, imprinting the mark of the French, English, or Dutch companies, that so each nation may distinguish their own, and to prevent their being chang'd by the natives for worse, as they are apt enough to do. In this particular, care is taken that the women, as tenderest, be not burnt too hard.

The branded slaves, after this, are returned to their former booth, where the factor is to subsist them at his own charge, which amounts to about two-pence a day for each of them, with bread and water, which is all their allowance. There they continue sometimes ten or fifteen days, till the sea is still enough to send them aboard; for very often it continues too boisterous for so long a time, unless in January, February and March, which is commonly the calmest season: and when it is so, the slaves are carried off by parcels, in bar-canoes, and put aboard the ships in the road.
Before they enter the canoes, or come out of the booth, their former Black masters strip them of every rag they have, without distinction of men or women; to supply which, in orderly ships, each of them as they come aboard is allowed a piece of canvas, to wrap around their waist, which is very acceptable to those poor wretches....

If there happens to be no stock of slaves at Fida, the factor must trust the Blacks with his goods, to the value of a hundred and fifty, or two hundred slaves; which goods they carry up into the inland, to buy slaves, at all the markets, for above two hundred leagues up the country, where they are kept like cattle in Europe; the slaves sold there being generally prisoners of war, taken from their enemies, like other booty, and perhaps some few sold by their own countrymen, in extreme want, or upon a famine; as also some as a punishment of heinous crimes: tho' many Europeans believe that parents sell their own children, men their wives and relations, which, if it ever happens, is so seldom, that it cannot justly be charged upon a whole nation, as a custom and common practice....

One thing is to be taken notice of by sea-faring men, that these Fida and Ardra slaves are of all the others, the most apt to revolt aboard ships, by a conspiracy carried on amongst themselves; especially such as are brought down to Fida, from very remote inland countries, who easily draw others into their plot: for being used to see mens flesh eaten in their own country, and publick markets held for the purpose, they are very full of the notion, that we buy and transport them to the same purpose; and will therefore watch all opportunities to deliver themselves, by assaulting a ship's crew, and murdering them all, if possible: whereof, we have almost every year some instances, in one European ship or other, that is filled with slaves.
Source: John Barbot, "A Description of the Coasts of North and South Guinea," in Thomas Astley and John Churchill, eds., Collection of Voyages and Travels (London, 1732).

Olaudah Equiano (Gustavus Vassa)

Olaudah Equiano, also known as Gustavus Vassa, vividly recounts the shock and isolation that he felt during the Middle Passage to Barbados and his fear that the European slavers would eat him.

"A MULTITUDE OF BLACK PEOPLE...CHAINED TOGETHER"

Their complexions, differing so much from ours, their long hair and the language they spoke, which was different from any I had ever heard, united to confirm me in this belief. Indeed, such were the horrors of my views and fears at the moment, that if ten thousand worlds had been my own, I would have freely parted with them all to have exchanged my condition with that of the meanest slave of my own country. When I looked around the ship and saw a large furnace of copper boiling, and a multitude of black people of every description chained together, every one of their countenances expressing dejection and sorrow, I no longer doubted my fate. Quite overpowered with horror and anguish, I fell motionless on the deck and fainted. When I recovered a little, I found some black people about me, and I believe some were those who had brought me on board and had been receiving their pay. They talked to me in order to cheer me up, but all in vain. I asked them if we were not to be eaten by those white men with horrible looks, red faces and long hair. They told me I was not. [Given a little liquor,] I took a little down my palate, which, instead of reviving me as they thought it would, threw me into the greatest consternation at the strange feeling it produced, having never tasted such liquor before. Soon after this, the blacks who had brought me on board went off and left me abandoned to despair. I now saw myself deprived of all chance of returning to my native country or even the least glimpse of hope of gaining the shore, which I now considered as friendly.
I even wished for my former slavery in preference to my present situation, which was filled with horrors of every kind. There I received such a salutation in my nostrils as I had never experienced in my life. With the loathsomeness of the stench and the crying together, I became so sick and low that I was not able to eat, nor had I the least desire to taste anything. I now wished for the last friend, Death, to relieve me. Soon, to my grief, two of the white men offered me eatables and on my refusing to eat, one of them held me fast by the hands and laid me across the windlass and tied my feet while the other flogged me severely. I had never experienced anything of this kind before. If I could have gotten over the nettings, I would have jumped over the side, but I could not. The crew used to watch very closely those of us who were not chained down to the decks, lest we should leap into the water. I have seen some of these poor African prisoners most severely cut for attempting to do so, and hourly whipped for not eating. This indeed was often the case with myself. I inquired of these what was to be done with us. They gave me to understand we were to be carried to these white people's country to work for them. I then was a little revived, and thought if it were no worse than working, my situation was not so desperate. But still I feared that I should be put to death, the white people looked and acted in so savage a manner. I have never seen among my people such instances of brutal cruelty, and this not only shown towards us blacks, but also to some of the whites themselves. One white man in particular I saw, when we were permitted to be on deck, flogged so unmercifully with a large rope near the foremast that he died in consequence of it, and they tossed him over the side as they would have done a brute. This made me fear these people the more, and I expected nothing less than to be treated in the same manner.
I asked them if these people had no country, but lived in this hollow place? They told me they did not but came from a distant land. "Then," said I, "how comes it that in all our country we never heard of them?" They told me because they lived so far off. I then asked where were their women? Had they any like themselves? I was told they had. "And why do we not see them?" I asked. They answered, "Because they were left behind." I asked how the vessel would go? They told me they could not tell, but there was cloth put upon the masts by the help of the ropes I saw, and then vessels went on, and the white men had some spell or magic they put in the water when they liked in order to stop the vessel when they liked. I was exceedingly amazed at this account, and really thought they were spirits. I therefore wished much to be from amongst them, for I expected they would sacrifice me. But my wishes were in vain—for we were so quartered that it was impossible for us to make our escape. At last, when the ship we were in had got in all her cargo, they made ready with many fearful noises, and we were all put under deck, so that we could not see how they managed the vessel. The stench of the hold while we were on the coast was so intolerably loathsome, that it was dangerous to remain there for any time...some of us had been permitted to stay on the deck for the fresh air. But now that the whole ship's cargo were confined together, it became absolutely pestilential. The closeness of the place and the heat of the climate, added to the number in the ship, which was so crowded that each had scarcely room to turn himself, almost suffocated us. This produced copious perspirations so that the air became unfit for respiration from a variety of loathsome smells, and brought on a sickness among the slaves, of which many died—thus falling victims of the improvident avarice, as I may call it, of their purchasers.
This wretched situation was again aggravated by the galling of the chains, which now became insupportable, and the filth of the necessary tubs [toilets] into which the children often fell and were almost suffocated. The shrieks of the women and the groans of the dying rendered the whole a scene of horror almost inconceivable. Happily perhaps for myself, I was soon reduced so low that it was necessary to keep me almost always on deck and from my extreme youth I was not put into fetters. In this situation I expected every hour to share the fate of my companions, some of whom were almost daily brought upon the deck at the point of death, which I began to hope would soon put an end to my miseries. Often did I think many of the inhabitants of the deep much more happy than myself. I envied them the freedom they enjoyed, and as often wished I could change my condition for theirs. Every circumstance I met with, served only to render my state more painful and heightened my apprehensions and my opinion of the cruelty of the whites. One day, when we had a smooth sea and moderate wind, two of my wearied countrymen who were chained together (I was near them at the time), preferring death to such a life of misery, somehow made through the nettings and jumped into the sea. Immediately another quite dejected fellow, who on account of his illness was suffered to be out of irons, followed their example. I believe many more would very soon have done the same if they had not been prevented by the ship's crew, who were instantly alarmed. Those of us that were the most active were in a moment put down under the deck, and there was such a noise and confusion among the people of the ship as I never heard before to stop her and get the boat out to go after the slaves. However, two of the wretches were drowned, but they got the other and afterwards flogged him unmercifully for thus attempting to prefer death to slavery. 
I can now relate hardships which are inseparable from this accursed trade. Many a time we were near suffocation from the want of fresh air, which we were often without for whole days together. This, and the stench of the necessary tubs, carried off many.

Source: The Interesting Narrative of the Life of Olaudah Equiano or Gustavus Vassa the African (London, 1789).

Alexander Falconbridge

Alexander Falconbridge, a surgeon aboard slave ships and later the governor of a British colony for freed slaves in Sierra Leone, offers a vivid account of the Middle Passage.

"THE MEN NEGROES...ARE...FASTENED TOGETHER...BY HANDCUFFS"

From the time of the arrival of the ships to their departure, which is usually about three months, scarce a day passes without some Negroes being purchased and carried on board; sometimes in small and sometimes in large numbers. The whole number taken on board depends on circumstances. In a voyage I once made, our stock of merchandise was exhausted in the purchase of about 380 Negroes, which was expected to have procured 500...

The unhappy wretches thus disposed of are bought by the black traders at fairs, which are held for that purpose, at the distance of upwards of two hundred miles from the sea coast; and these fairs are said to be supplied from an interior part of the country. Many Negroes, upon being questioned relative to the places of their nativity, have asserted that they have travelled during the revolution of several moons (their usual method of calculating time) before they have reached the places where they were purchased by the black traders. At these fairs, which are held at uncertain periods, but generally every six weeks, several thousands are frequently exposed to sale who had been collected from all parts of the country for a very considerable distance around....During one of my voyages, the black traders brought down, in different canoes, from twelve to fifteen hundred Negroes who had been purchased at one fair.
They consisted chiefly of men and boys, the women seldom exceeding a third of the whole number. From forty to two hundred Negroes are generally purchased at a time by the black traders, according to the opulence of the buyer, and consist of all ages, from a month to sixty years and upwards. Scarcely any age or situation is deemed an exception, the price being proportionable. Women sometimes form a part of them, who happen to be so far advanced in their pregnancy as to be delivered during their journey from the fairs to the coast; and I have frequently seen instances of deliveries on board ship....

When the Negroes, whom the black traders have to dispose of, are shown to the European purchasers, they first examine them relative to their age. They then minutely inspect their persons and inquire into the state of their health; if they are afflicted with any disease or are deformed or have bad eyes or teeth; if they are lame or weak in the joints or distorted in the back or of a slender make or narrow in the chest; in short, if they have been ill or are afflicted in any manner so as to render them incapable of much labor. If any of the foregoing defects are discovered in them they are rejected. But if approved of, they are generally taken on board the ship the same evening. The purchaser has liberty to return on the following morning, but not afterwards, such as upon re-examination are found exceptionable....

Near the mainmast a partition is constructed of boards which reaches athwart the ship. This division is called a barricado. It is about eight feet in height and is made to project about two feet over the sides of the ship. In this barricado there is a door at which a sentinel is placed during the time the Negroes are permitted to come upon the deck.
It serves to keep the different sexes apart; and as there are small holes in it, where blunderbusses are fixed and sometimes a cannon, it is found very convenient for quelling the insurrections that now and then happen....

The men Negroes, on being brought aboard the ship, are immediately fastened together, two and two, by handcuffs on their wrists and by irons riveted on their legs. They are then sent down between the decks and placed in an apartment partitioned off for that purpose. The women also are placed in a separate apartment between the decks, but without being ironed. An adjoining room on the same deck is appointed for the boys. Thus they are all placed in different apartments.

But at the same time, however, they are frequently stowed so close, as to admit of no other position than lying on their sides. Nor will the height between decks, unless directly under the grating, permit the indulgence of an erect posture; especially where there are platforms, which is generally the case. These platforms are a kind of shelf, about eight or nine feet in breadth, extending from the side of the ship toward the centre. They are placed nearly midway between the decks, at the distance of two or three feet from each deck. Upon these the Negroes are stowed in the same manner as they are on the deck underneath.

In each of the apartments are placed three or four large buckets, of a conical form, nearly two feet in diameter at the bottom and only one foot at the top and in depth of about twenty-eight inches, to which, when necessary, the Negroes have recourse. It often happens that those who are placed at a distance from the buckets, in endeavoring to get to them, tumble over their companions, in consequence of their being shackled. These accidents, although unavoidable, are productive of continual quarrels in which some of them are always bruised.
In this distressed situation, unable to proceed and prevented from getting to the tubs, they desist from the attempt; and as the necessities of nature are not to be resisted, ease themselves as they lie. This becomes a fresh source of broils and disturbances and tends to render the condition of the poor captive wretches still more uncomfortable. The nuisance arising from these circumstances is not infrequently increased by the tubs being too small for the purpose intended and their being emptied but once every day. The rule for doing so, however, varies in different ships according to the attention paid to the health and convenience of the slaves by the captain.

About eight o'clock in the morning the Negroes are generally brought upon deck. Their irons being examined, a long chain, which is locked to a ring-bolt fixed in the deck, is run through the rings of the shackles of the men and then locked to another ring-bolt fixed also in the deck. By this means fifty or sixty and sometimes more are fastened to one chain in order to prevent them from rising or endeavoring to escape. If the weather proves favorable they are permitted to remain in that situation till four or five in the afternoon when they are disengaged from the chain and sent below.

The diet of the Negroes while on board, consists chiefly of horse beans boiled to the consistency of a pulp; of boiled yams and rice and sometimes a small quantity of beef or pork. The latter are frequently taken from the provisions laid in for the sailors. They sometimes make use of a sauce composed of palm-oil mixed with flour, water and pepper, which the sailors call slabber-sauce. Yams are the favorite food of the Eboe [Ibo] or Bight Negroes, and rice or corn of those from the Gold or Windward Coast; each preferring the produce of their native soil....

They are commonly fed twice a day; about eight o'clock in the morning and four in the afternoon. In most ships they are only fed with their own food once a day.
Their food is served up to them in tubs about the size of a small water bucket. They are placed round these tubs, in companies of ten to each tub, out of which they feed themselves with wooden spoons. These they soon lose and when they are not allowed others they feed themselves with their hands. In favorable weather they are fed upon deck but in bad weather their food is given them below. Numberless quarrels take place among them during their meals; more especially when they are put upon short allowance, which frequently happens if the passage from the coast of Guinea to the West Indies islands proves of unusual length. In that case, the weak are obliged to be content with a very scanty portion. Their allowance of water is about half a pint each at every meal. It is handed round in a bucket and given to each Negro in a pannekin, a small utensil with a straight handle, somewhat similar to a sauce-boat. However, when the ships approach the islands with a favourable breeze, the slaves are no longer restricted.

Upon the Negroes refusing to take sustenance, I have seen coals of fire, glowing hot, put on a shovel and placed so near their lips as to scorch and burn them. And this has been accompanied with threats of forcing them to swallow the coals if they any longer persisted in refusing to eat. These means have generally had the desired effect. I have also been credibly informed that a certain captain in the slave-trade, poured melted lead on such of his Negroes as obstinately refused their food.

Exercise being deemed necessary for the preservation of their health they are sometimes obliged to dance when the weather will permit their coming on deck. If they go about it reluctantly or do not move with agility, they are flogged; a person standing by them all the time with a cat-o'-nine-tails in his hands for the purpose.
Their music, upon these occasions, consists of a drum, sometimes with only one head; and when that is worn out they make use of the bottom of one of the tubs before described. The poor wretches are frequently compelled to sing also; but when they do so, their songs are generally, as may naturally be expected, melancholy lamentations of their exile from their native country. The women are furnished with beads for the purpose of affording them some diversion. But this end is generally defeated by the squabbles which are occasioned in consequence of their stealing from each other. On board some ships the common sailors are allowed to have intercourse with such of the black women whose consent they can procure. And some of them have been known to take the inconstancy of their paramours so much to heart as to leap overboard and drown themselves. The officers are permitted to indulge their passions among them at pleasure and sometimes are guilty of such excesses as disgrace human nature.... The hardships and inconveniences suffered by the Negroes during the passage are scarcely to be enumerated or conceived. They are far more violently affected by seasickness than Europeans. It frequently terminates in death, especially among the women. But the exclusion of fresh air is among the most intolerable. For the purpose of admitting this needful refreshment, most of the ships in the slave trade are provided, between the decks, with five or six air-ports on each side of the ship of about five inches in length and four in breadth. In addition, some ships, but not one in twenty, have what they denominate wind-sails. But whenever the sea is rough and the rain heavy it becomes necessary to shut these and every other conveyance by which the air is admitted. The fresh air being thus excluded, the Negroes' rooms soon grow intolerably hot. 
The confined air, rendered noxious by the effluvia exhaled from their bodies and being repeatedly breathed, soon produces fevers and fluxes which generally carry off great numbers of them. During the voyages I made, I was frequently witness to the fatal effects of this exclusion of fresh air. I will give one instance, as it serves to convey some idea, though a very faint one, of their terrible sufferings.... Some wet and blowing weather having occasioned the port-holes to be shut and the grating to be covered, fluxes and fevers among the Negroes ensued. While they were in this situation, I frequently went down among them till at length their room became so extremely hot as to be only bearable for a very short time. But the excessive heat was not the only thing that rendered their situation intolerable. The deck, that is the floor of their rooms, was so covered with the blood and mucus which had proceeded from them in consequence of the flux, that it resembled a slaughter-house. It is not in the power of the human imagination to picture a situation more dreadful or disgusting. Numbers of the slaves having fainted, they were carried upon deck where several of them died and the rest with great difficulty were restored.... As very few of the Negroes can so far brook the loss of their liberty and the hardships they endure, they are ever on the watch to take advantage of the least negligence in their oppressors. Insurrections are frequently the consequence; which are seldom suppressed without much bloodshed. Sometimes these are successful and the whole ship's company is cut off. They are likewise always ready to seize every opportunity for committing some acts of desperation to free themselves from their miserable state and notwithstanding the restraints which are laid, they often succeed. Source: Alexander Falconbridge, An Account of the Slave Trade on the Coast of Africa (London, 1788). 
Attributions

Title Image: Title page of The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African (New York: W. Durrell, 1791), Library Company of Philadelphia, no restrictions, via Wikimedia Commons.

Adapted from:
- https://courses.lumenlearning.com/atd-tcc-worldciv2/chapter/the-transatlantic-slave-trade-2/ (CC BY-NC-ND 4.0: https://creativecommons.org/licenses/by-nc-nd/4.0/)
- https://courses.lumenlearning.com/atd-tcc-worldciv2/chapter/the-transatlantic-slave-trade/ (CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/)
- http://www.vgskole.net/prosjekt/slavrute/6.htm (public domain, compiled by Steven Mintz)
- https://guides.hostos.cuny.edu/lac118/3-1 (CC BY-NC 4.0: http://creativecommons.org/licenses/by-nc/4.0/; CC BY-NC-SA 3.0 US: http://creativecommons.org/licenses/by-nc-sa/3.0/us/)
- https://courses.lumenlearning.com/boundless-worldhistory/chapter/west-african-empires/ (CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/)
- https://callipedia.miraheze.org/wiki/Sokoto_Caliphate (CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/)
- https://brill.com/view/journals/jjs/1/2/article-p245_6.xml?language=en (CC BY-NC 4.0: http://creativecommons.org/licenses/by-nc/4.0)
- https://courses.lumenlearning.com/boundless-worldhistory/chapter/southern-african-states/ (CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/)
- Alexander Ives Bortolot, "Women Leaders in African History: Ana Nzinga, Queen of Ndongo": https://www.metmuseum.org/toah/hd/pwmn_2/hd_pwmn_2.htm
oercommons
2025-03-18T00:35:08.849576
null
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87894/overview", "title": "Statewide Dual Credit World History, The Making of Early Modern World 1450-1700 CE", "author": null }
https://oercommons.org/courseware/lesson/100919/overview
Handicrafts

Overview
This site aims to introduce the traditional crafts and handmade industries in which the Yemeni people have excelled, using the raw materials available to them according to the nature of each region of Yemen. The site covers the history, definition, importance, and types of handicrafts.

Home Page
The site introduces the traditional crafts and handmade industries of Yemen, covering their history, definition, importance, and types.
Prepared by: Ghadeer Amin Ahmed Al-Qudaimi
Supervised by: Prof. Dr. Anwar Abdulaziz Al-Wahsh

Topics
- History of handicrafts
- Definition of handicrafts
- Importance of handicrafts
- Types of handicrafts

About Us
We are students of the Department of Educational Technology and Information, third level, computer section, tenth cohort.

History of Handicrafts
The first handicrafts made by humans appeared at the dawn of human development, born of the need for food, the need for tools to be fashioned for hunting prey, and the primitive instinct for survival. About 2.6 million years ago, early humans relied on objects from their environment, creating the first handicrafts ever, such as the spears and axes they used to survive, to hunt, and to protect themselves in their wild habitats. In doing so they built a creative artistic foundation for the development of handicrafts.

Definition of Handicrafts
Handicrafts, also known as traditional industries, are industries that rely on the hand, or only on simple tools, without the use of modern machines. They are also skills that can be learned and practiced with a high degree of craftsmanship.

Importance of Handicrafts
Cultural importance: Handicrafts play a very important role in representing the culture and traditions of any country or region. Handicrafts are an important means of preserving the rich traditional arts, heritage, culture, skills, and talents associated with people's way of life and history. 
Economic importance: Handicrafts are extremely important for economic development. They provide substantial employment opportunities even with low capital investment and are a prominent means of foreign earnings.
oercommons
2025-03-18T00:35:08.877729
Syllabus
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/100919/overview", "title": "الحرف اليدوية", "author": "Lesson Plan" }
https://oercommons.org/courseware/lesson/103541/overview
Plane Kinetics of Rigid Bodies (Summary) Overview This is a summary of the plane kinetics of rigid bodies. Plane Kinetics of Rigid Bodies (Summary) This chart summarizes the plane kinetics (dynamics) of rigid bodies.
oercommons
2025-03-18T00:35:08.895182
05/04/2023
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/103541/overview", "title": "Plane Kinetics of Rigid Bodies (Summary)", "author": "Farid Mahboubi Nasrekani" }
https://oercommons.org/courseware/lesson/103797/overview
USNH - IHE Accessibility in OER Implementation Guide

Overview
In this section, you and your team will engage in a Landscape Analysis to uncover key structures and supports that can guide your work to support Accessibility in OER. You may or may not answer all of these questions, but this is an offering.

May 11 - Section One: Landscape Analysis for Accessibility in OER in Local Context (Work on during May 11th implementation)
In this section, you and your team will engage in a Landscape Analysis to uncover key structures and supports that can guide your work to support Accessibility in OER. We encourage you to explore some of the questions from each category. You may or may not answer all of these questions, but this is an offering. We ask that you complete Parts One, Two and Six.

Part One: Initial Thoughts
Gather together from across the University System of New Hampshire to learn more about accessible OERs, meet fellow colleagues, and explore ways we can share expertise and learn together.

Part Two: Introductory probing questions:
What does accessibility look like in our organization? How do we measure accessibility? Each institution has its own approach to both questions, though we are all dedicated to accessibility.
What does OER look like in our organization? How do we measure access to OER? Each institution is engaged in OERs in different ways.

Part Six: Final probing questions:
What is our current goal for Accessibility in OER and why is that our goal?
Group Conversation: We revisited the summary from last week's conversation. Our team saw a lot of crossover between creating OERs (a good thing that helps students save money) and making sure they are accessible at the outset, to expand access and minimize issues BEFORE OERs are created and shared.
Who have we not yet included while thinking about this work?
What barriers remain when considering this work?
What would genuine change look like for our organization for this work? 
Section Two: Team Focus (Finish before May 25th to share during Implementation Session Two) Identifying and Describing a Problem of Practice The following questions should help your team ensure that you are focusing your collaboration. What is your Team’s specific goal for this series? Explore Cross-Systems Collaboration via an Online Community of Practice This series has provided colleagues from across the University System of New Hampshire with an opportunity to identify a shared experience related to OERs and accessibility. This is the first time many of us have met one another, and it has been a rewarding process. Toward that end, we are considering forming an online community of practice that includes representation from across all university system institutions (UNH, KSC, PSU). Please create a Focus Question that explains your goal and provides specific topics that you would like feedback on. This is what you will share in your breakout groups for feedback. What would an online community of practice around the topic of accessible OERs look like for those who wanted to be included? (Save for during May 25th's session.) What feedback did you receive from another team during the May 25th Implementation Session? Establish champions, take the temperature of campuses, create a meaningful process, model success, survey, share results, iterate. Section Three: Team Work Time and Next Steps (Complete by the end of Implementation Session Three) Sharing and Next Steps What was your redefined goal for this series? Explore Cross-Systems Collaboration via an Online Community of Practice This series has provided colleagues from across the University System of New Hampshire with an opportunity to identify a shared experience related to OERs and accessibility. This is the first time many of us have met one another, and it has been a rewarding process. 
Toward that end, we are considering forming an online community of practice that includes representation from across all university system institutions (UNH, KSC, PSU) where we can explore the following topics and collaborate on initiatives related to: Information that would be good to include in a resource (that is both accessible and an OER) that provides broad, general guidelines and tips about creating OERs (including accessibility as a prominent consideration at the outset). Ways to raise awareness among stakeholders about the importance of thinking about accessibility in OERs before creation or adoption What other partners might support this work and other topics related to OERs and accessibility emerge through collegial dialog? Please create a Focus Question that explains your goal and provides specific topics that you would like feedback on. This is what you will share in your breakout groups for feedback. What would an online community of practice around the topic of accessible OERs look like for those who wanted to be included? Plan: Explore Goal and Identify Key Steps: Hold virtual meetings quarterly (begin in late summer) Proposed structure: rotating facilitation, beginning with Julie for summer, then hand off to Scott for two meetings, then rotate? What does your team want to celebrate? New learning, new / stronger connections with colleagues, and an avenue for us to continue to create space to have dialog and explore accessibility and OERs What did your team accomplish? If you have links to resources, please include them here. What are your team’s next steps? 
We created a format outline for our community of practice: Summer and Fall 2023 meetings (Julie offered to facilitate) to learn, resource share and brainstorm; identify and prioritize potential system-wide collaborations / projects Summer meeting: consider sharing expertise and resources; develop purpose, goals, and possible project topics Fall meeting: invite faculty leading OER at each institution to present share overview and challenges (15 minutes each) Winter and Spring 2024 (new facilitator): Project development, implementation, evaluation Summer and Fall 2023: Review project evaluation; learn, resource share and brainstorm; identify and prioritize potential system-wide collaborations / projects Establish champions, take the temperature of campuses, create a meaningful process, model success, survey, share results, iterate
oercommons
2025-03-18T00:35:08.917344
Scott Lapinski
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/103797/overview", "title": "USNH - IHE Accessibility in OER Implementation Guide", "author": "Julie Moser" }
https://oercommons.org/courseware/lesson/98427/overview
A module on biology
A course on biology
oercommons
2025-03-18T00:35:08.941375
11/02/2022
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/98427/overview", "title": "Cours sur la biologie", "author": "Souhad shlaka" }
https://oercommons.org/courseware/lesson/123354/overview
JSTOR

Overview
JSTOR is a digital library of journals, articles, books, images, and primary sources that supports academic and research purposes.

JSTOR: https://www.jstor.org/
ARPITA SARKAR, Student of M.LIS, Jadavpur University

About JSTOR
JSTOR (short for "Journal Storage") collects scholarly (peer-reviewed) articles, especially in the humanities and social sciences. JSTOR was conceived in 1994 by William G. Bowen. It provides access to more than 12 million journals, articles, books, images, and primary sources (pamphlets, monographs, manuscripts, etc.) in 75 disciplines. JSTOR is part of ITHAKA, a nonprofit organization that helps the academic community use digital technologies to preserve scholarly information and to advance research and education in sustainable ways.

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

Purpose
JSTOR's mission is to help the scholarly community take advantage of advances in information technologies. Access to JSTOR is usually granted through entities like universities, libraries, and research institutions. Although certain content can be accessed freely, complete access to the majority of resources requires a subscription, which is commonly provided by academic institutions. Its motto is "explore the world's knowledge, cultures and ideas". The more than 12 million journals, articles, books, images, and primary sources it offers across 75 disciplines are very helpful for research and teaching. Basic search, advanced search, image search, browsing, and workshops are available.

Search in JSTOR
There are two search forms on JSTOR.org: a Basic Search (on the main page at http://www.jstor.org/ and at the top of most pages) and an Advanced Search (www.jstor.org/action/showAdvancedSearch). Articles and journals can be downloaded or saved, and citations (MLA, APA, Chicago) can be generated and exported. 
For example:

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

Basic Search in JSTOR

Searching by field
Users can search by title, author, journal name, publication name, ISSN, etc., using citation and field operators; the query abbreviations are listed below:
- jo: journal name
- ti: article title
- au: article author
- ca: article captions
- ty:fla full-length articles
- sn: ISSN
- vo: journal volume field
- ty:brv book reviews

For example: searching Everyday Life Information Seeking shows 98,752 results. Results can be sorted by Relevance, Newest, or Oldest, and refined by content type (academic content such as journals, book chapters, and research reports), primary source content (serials, books, documents, images, etc.), date, subject, language, and access type (Everything, covering both open and paid content, or Content I Can Access, which is free to the user).

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

Advanced searching
JSTOR supports advanced search techniques to improve the accuracy and relevance of searches.

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

Boolean operators (AND, OR, NOT)
Use Boolean operators for more precise searches. For example:
- "tea trade" AND "coffee trade"
- "Birds" OR "Butterfly"
- "United States" NOT "United Kingdom"

Combining search terms
A dropdown offers the Boolean operators (AND/OR/NOT) together with NEAR 5/10/25. The NEAR operator looks for combinations of keywords within 5, 10, or 25 words of each other, but it works only on single-keyword combinations. For example: Dog NEAR 5 Cat.

Narrowing results
Searches can be narrowed by item type, language, publication date, and journal or book title.

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

Phrase search
To search for an exact phrase of more than one term in a field, use parentheses () to enclose the search terms, or quotation marks (" ") to search for the exact phrase. 
For example: searching "Everyday Life Information Seeking Behaviour" in quotation marks narrows the results to only 42.

Wildcards
Wildcards stand in for one or more characters in a search term. The ? mark replaces a single character; the * replaces multiple characters. For example:
- Searching te?ts matches texts, tests, tents: words that start with te- and end with -ts.
- Searching world* matches worlds, worldwide, etc.: words that start with world-.

Image search
In JSTOR, images can also be searched and sorted by relevance, newest, oldest, title, or creator (in ascending or descending order), and by resolution. Results can be refined by content type (primary sources), image resolution, date, and classification. Image search also supports advanced searches such as All Content.

Source: https://www.jstor.org/ (screenshot taken for educational purposes only)

JSTOR also allows browsing books or journals by subject, title, publisher, collection, and image.

Conclusion
JSTOR is useful for academic research, offering extensive access to an enormous online library of scholarly materials. It has greatly reduced the demands on physical library space, enabled worldwide access to important academic resources, and encouraged progress through various programs and technological improvements. JSTOR remains essential in safeguarding academic work and aiding the academic community, making certain that knowledge is available to everyone.
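The search syntax described in this section (field codes, Boolean operators, and phrase quoting) can be combined into a single query string. The helper below is a hypothetical sketch, not an official JSTOR API; the function names are illustrative only:

```python
# Hypothetical sketch: assembling a JSTOR-style query string from the
# syntax described above (field codes, Boolean operators, phrases).
# These helpers are illustrative, not part of any official JSTOR API.

def field(code, term):
    """Scope a term to a field code, e.g. au: for author, ti: for title."""
    # Quote multi-word terms so they are treated as exact phrases.
    if " " in term:
        term = f'"{term}"'
    return f"{code}:{term}"

def combine(op, *clauses):
    """Join clauses with an operator (AND, OR, NOT, or NEAR 5/10/25)."""
    return "(" + f" {op} ".join(clauses) + ")"

# Example: articles by Savolainen with "information seeking" in the title.
query = combine("AND",
                field("au", "Savolainen"),
                field("ti", "information seeking"))
print(query)  # (au:Savolainen AND ti:"information seeking")
```

The resulting string can be pasted into the Basic Search box, mirroring the manual examples given above.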
oercommons
2025-03-18T00:35:08.974608
12/24/2024
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/123354/overview", "title": "JSTOR", "author": "Arpita Sarkar" }
https://oercommons.org/courseware/lesson/89681/overview
Lecture

Overview
Resource

Chapter 1
Topic 2. Economic Needs and the Production Possibilities of Society
2.1. The economic needs of society, their essence and classification. The boundlessness of needs.
2.2. The essence of social production. The main factors of social production and their interaction.
2.3. Resolving the problem of unlimited needs and limited economic resources.
2.4. The efficiency of social production.

2.1. The economic needs of society, their essence and classification. The boundlessness of needs.

The ultimate goal of production is to satisfy the diverse needs of the human being as a person, a consumer, and a producer. In its most general form, a need is a person's desire for a particular good that sustains and improves his or her life. Economic needs are people's relation to the economic conditions of their life that give them satisfaction, pleasure, or comfort and prompt them to act in order to obtain and possess such conditions. Besides economic needs there are cultural, political, ideological, national, and other needs. Needs have an objective-subjective character. They can be classified, first of all, by subject and by object.

By subject, needs are divided into:
- individual, collective, and social needs;
- the needs of households, enterprises, and the state;
- the needs of socio-economic classes and social groups.

By object, needs are divided into:
- needs generated by human existence as a biological being;
- material and spiritual needs;
- primary and secondary needs.

By the way they are satisfied, economic needs are divided into needs for consumer goods and needs for means of production. The former characterize personal, individual needs; the latter, productive needs. Contradictions exist between these needs, which society must resolve in a way that ensures the development of production and the satisfaction of growing personal and productive needs. 
One of the fundamental propositions of political economy is the following: a person's needs are unlimited, while the ways and means of satisfying them are limited; that is, the productive resources for satisfying human needs are limited. The boundlessness of needs and the scarcity of resources give rise to two laws of economic life: the law of rising needs and the law of the development of the factors of production. These laws are interrelated and characterize two sides of socio-economic progress: 1) the steady development of the human being with his or her growing needs; 2) the rising efficiency of productive resources, achieved through the consistent expansion of reproducible resources and of their qualitative characteristics (productivity, usefulness, and so on). The factors of production, developing continuously, not only create the conditions for satisfying existing needs but also become the basis for the emergence of new ones.

By degree of realization, needs can be classified as actual, effective, and prospective. Actual needs basically correspond to the attained level of production of the relevant goods and can realistically be satisfied. Effective needs are linked to a person's own income and to the level of prices of goods and services. Prospective needs are those generated by the present level of economic development, the production for which is only beginning to be mastered.

People's needs are not constant; they are a product of society's development. Their character, structure, and ways of satisfaction depend on the attained level of the productive forces, the degree of development of culture and science, and the socio-economic order. In every society the law of rising needs operates: as social production develops, and with it the human being as a productive force, needs gradually grow. This law expresses the following regularity: grown needs stimulate the growth of production, which in turn leads to a further growth of needs. This is a continuous process, since human needs are unlimited. 
The law of rising needs manifests itself in two main forms: the growth of personal needs and the growth of productive needs. Personal needs are satisfied through the individual consumption of goods and services, either by each member of society separately (food, clothing, housing, and the like) or within the family (articles of cultural and household use), while others are satisfied through joint consumption (education, health care, and the like). The second form in which this law manifests itself is the growth of productive needs, that is, needs for means of production, which are the material basis for expanding production in order to satisfy personal needs.

From the standpoint of the law of rising needs, a distinction is drawn between traditional and new needs. Traditional needs are those that have become habitual for people and are satisfied by the production of the corresponding product (needs for bread, meat, milk, television sets, refrigerators, tape recorders, and so on). New needs are those that have not yet become habitual for most people and are recognized by them as needs that will be satisfied in the future (new models of computers, aircraft, video equipment, and so on).

In accordance with the law of rising needs, a qualitative and constant increase of needs takes place both vertically and horizontally: needs of a higher order arise among an ever greater number of people, and as production develops people strive precisely to satisfy such needs. A peculiarity of this law is its irreversible character: in any situation, needs change in the direction of increase.

2.2. The essence of social production. The main factors of social production and their interaction.

The process of production is the activity of people in directly creating the material goods needed to satisfy their wants. All production is a social and continuous process: constantly repeating itself, it has developed historically from the simplest forms (primitive humans obtaining food with primitive tools) to the present level of automation and high productivity. 
The creation of material goods is the decisive sphere of human activity. The relations that take shape in the process of production therefore determine the character of the relations of distribution, exchange, and consumption, and condition the economic order of society as a whole. From this standpoint, clarifying the essence of the production process and its peculiarities is of primary importance for understanding the entire system of economic relations.

Every process of production is, above all, a labor process, that is, a process carried out between the human being and nature, in which the human being transforms nature to satisfy his or her needs. An indispensable condition of the production process in any society is the presence of material (objective) and personal (subjective) factors of production.

The material factors comprise the means of production, consisting of instruments of labor and objects of labor. In their natural form, the material factors represent a set of heterogeneous use values with different properties: weight, shape, power, and so on. In this aspect they are not an economic category and are the subject of the natural, technical, and technological sciences. Economic theory considers the means of production from two sides: first, as a necessary condition for creating material goods; second, as an object of ownership. These two sides determine the character and manner of using the means of production and help to trace the social relations that arise in the interaction between the means of production and those who work with them. Thus the material factors (the means of production) are the material basis of certain social relations, and analyzing them in this way makes it possible to clarify the possibilities of their development, their efficiency, and the social consequences of their use under one social order or another.

The second main factor of production is the personal (subjective) factor: the worker, with his or her capacity for labor. 
The capacity for labor, that is, the totality of physical and mental abilities possessed by the living human organism and set in motion whenever the person produces any use values, is called labor power. In this sense labor power is a universal category of all modes of production.

The essence and characteristic features of the production process are revealed through the mechanism by which the material and personal factors of production are combined. The manner in which these factors are combined is determined by people's relation to the means of production. In a society dominated by the commodity form of organizing production, labor power is hired, which reflects its commodity form.

In the course of the interaction of the main factors of production, people's capacity for labor must be maintained and continuously renewed, and labor power must grow quantitatively and qualitatively. The material basis for the reproduction of labor power is the fund of means of subsistence, that is, the totality of the material and spiritual means of life necessary for the normal reproduction of labor power. It covers restoring the worker's own capacity for work, supporting the family, education, raising qualifications, socio-cultural development, and other needs.

Under present-day conditions, new tasks and problems connected with the reproduction of labor power have appeared. The point is that the level of employment in the overwhelming majority of the country's economic regions is approaching its relative limit. The possibilities of attracting additional labor resources, which depend mainly on natural population growth, have therefore diminished substantially in recent years. In this connection, particular importance attaches to the problems of making better use of working time, strengthening labor discipline, eliminating staff turnover, and balancing the growth of jobs with the number of labor resources; in other words, the task is to ensure a transition to a labor-saving form of production development. 
Today, for the rational use of labor power, it is very important to reshape the worker's personal qualities: a heightened sense of responsibility, independence, conscious discipline, creative initiative, a modern style of economic thinking, and so on. All this is connected with the development of the human factor, with creating a type of worker who meets the requirements of modern production. Thus the rational and full use of the personal factor (labor power) is not an end in itself but a means of further developing and raising the efficiency of social production.

Under modern conditions of production, the scientific and technological revolution has brought into being a new factor: information. Today it is a condition for the operation of the modern system of machines and for raising the quality and qualifications of labor power, as well as a necessary precondition for the successful organization of the production process itself. The scientific and technological revolution has also caused the rapid development of the service sphere, which creates no independent product yet performs important social functions. This sphere includes the production infrastructure (transport, communications, energy and information services) and the social infrastructure (education, health care, public catering, housing and communal services).

2.3. Resolving the problem of unlimited needs and limited economic resources.

Regardless of how they are classified, all factors of production are used to produce economic goods. Suppose that, in a very simplified production process, one factor is used to produce a single material good. This can be expressed by the formula Q = F(A), where Q is the economic good, A is the factor of production, and F is the function. In this case the economic good is the result of a single factor. In reality the production process is much more complex and, as a rule, uses not one but many factors (Fig. 2). 
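The multi-factor case mentioned above can be written out explicitly; this is the standard textbook generalization of the single-factor formula, not a formula taken from this lecture:

```latex
% General production function: output Q depends on n factors of
% production X_1, ..., X_n (labor, capital, land, information, ...).
Q = F(X_1, X_2, \dots, X_n)
```

Each X_i denotes the quantity of one factor employed; the single-factor formula Q = F(A) is the special case n = 1.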
If the production process lies on the line AA, the factor of production is used optimally and is constantly reproduced; if it lies below this line, the factor is used only partially; if above the line AA, the factor is used excessively. In the last two cases the equilibrium of the production process is disturbed, leading either to a shortage of the factor or to a need for additional quantities of it. Thus the most efficient use of a factor of production is a condition for further increasing the scale of production, that is, for the expanded production of a given product. Since the production process involves both costs and results, the question of the production function arises. The theory of the factors of production relies to a certain extent on mathematical modeling: factor models take the form of mathematical dependencies linking the size of the production result obtained with the use of the production factors that brought it about. The production function is the technical relationship between the quantity of resources used by producers and the volume of output produced on that basis. The production function can be used at the macroeconomic level, where it reflects the dependence of the aggregate volume of production in monetary terms on its inputs, as well as at the microeconomic level. At the microeconomic level every firm has its own production function, distinct from those of other economic agents. At the same time, the production function can be applied to individual industries, types of production, and even to the output of a single division of an enterprise. As a rule the production function is of theoretical significance, but it is not without practical application; economists use it widely to evaluate the individual resources that ensure economic growth. The first version of this kind was the so-called Cobb-Douglas production function, which analyzes the dependence of the volume of production on the use of two basic resources: capital and labor.
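The Cobb-Douglas function described here is conventionally written as follows (standard textbook notation, not taken from this lesson's figures):

```latex
% Cobb-Douglas production function:
%   Q             - volume of output
%   K, L          - capital and labor inputs
%   A             - total factor productivity
%   \alpha, \beta - output elasticities of capital and labor
Q = A\,K^{\alpha}L^{\beta}

% Dynamic variant with a technical-progress term growing at rate g,
% in the spirit of Solow's later extension of the static model:
Q(t) = A_{0}\,e^{gt}\,K(t)^{\alpha}L(t)^{1-\alpha}
```

When alpha + beta = 1 the function exhibits constant returns to scale: doubling both capital and labor doubles output.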
The theory of the production function developed further through the analysis of an additional factor: time. Analyzing this factor meant moving from static estimates of the Cobb-Douglas model to a dynamic evaluation that takes into account the influence of technical progress on the volume of output. The greatest subsequent achievements in the study of the function belong to the American economists R. Solow and E. Denison. R. Solow calculated an indicator characterizing embodied technical progress, reflecting the efficiency of new investment in connection with major technical and technological changes in the production process. E. Denison studied an indicator of disembodied technical progress, reflecting qualitative changes in the economy that result from expenditures other than investment. According to this concept, technical progress can advance through raising the level of education, improving the qualifications of personnel, organizing labor better, and so on. The production function therefore shows that there are many ways of producing a given volume of output from a given set of factors of production. An improvement in technological parameters that maximizes the output of a given product is always reflected in a new production function. The production function can also be used to calculate the minimum quantity of inputs needed to produce any given volume of output. The relationship between a set of factors of production and the maximum possible volume of output that this set can produce is the essence of the production function.

2.4. The efficiency of social production.

All the types of economic resources that humanity uses as factors of production are limited in both quantity and quality. They are insufficient to satisfy all human needs.
Society therefore strives to use scarce resources efficiently, which is possible only when the economic system functions so that the main, objectively determined goal of social production is achieved with the least expenditure of natural, labor, and material resources, in accordance with the requirements of the law of scarcity. Understanding and applying the requirements of this law means overcoming the contradiction between satisfying society's needs and the limitedness of resources, and developing approaches oriented toward the development and more efficient use of the factors of production. The basic components of the efficient use of factors in the production process are labor productivity, product quality, the material intensity and capital intensity of output, and the rate of return. In general terms, the problem of the efficient functioning of the factors of production can be put as follows. The dynamic development of the factors of production is directly connected with technical progress, which brings about qualitative changes in the instruments of labor, in technique and technology, and in the skill level of the labor force; these constitute the productive forces of society. Of particular importance is the task of mastering the achievements of scientific and technological progress in order to ensure economic growth and the social well-being of people. This is the main factor in the dynamic growth of the social product (gross, final, net). The relationship between the growth rate of the social product and changes in the technical composition of production depends on which type of economic growth prevails in the economic structure of society: extensive or intensive. Extensive growth involves quantitative increases in the use of the factors of production on an unchanged technical basis, while intensive growth is characterized by increased output through the qualitative improvement of the factors of production.
When the production process lags behind the level of scientific and technological progress, the factors of production are used inefficiently, material, natural, and labor resources are spent irrationally, and the efficiency of the production process as a whole declines. At the same time, the functional influence of the technical composition of production on labor intensity, material intensity, and capital intensity grows stronger.

TO CONSOLIDATE YOUR KNOWLEDGE, COMPLETE THE TESTS!

The aggregate needs of society as the goal of economic activity: "For social needs to be satisfied, exactly as much of each commodity must be produced as society requires." (M. Tugan-Baranovsky)

Economic interests: "The economic relations of any given society manifest themselves first of all as interests." (K. Marx)

Questions for self-assessment
1. Explain what a need is.
2. Describe the nature and origin of needs.
3. What are primary needs, and what significance do they have in human life?
4. Formulate the essence of the law of needs.
5. Why is knowledge of the tendencies and prospects of the development of needs, and of their structural changes, necessary?
6. Describe Maslow's hierarchy of needs.
7. What is an interest?
8. How does an interest arise? On what is it based?
9. What is the difference between an economic need and an economic interest?
10. Describe the subjects and objects of economic interests.
11. How are economic interests reconciled in society?
12. Explain the role of needs and interests as a source of the activity of economic agents.
oercommons
2025-03-18T00:35:09.013951
01/31/2022
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/89681/overview", "title": "Лекція", "author": "Ольга Пальоха" }
https://oercommons.org/courseware/lesson/99510/overview
Education and Peace Overview We chose these SDGs because peace and education are among the most fundamental requirements for a society to function properly. Without education on a difficult matter, it will be hard to ensure peace if we don't know the roots, causes, or solutions of an issue.
oercommons
2025-03-18T00:35:09.053034
12/17/2022
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/99510/overview", "title": "Education and Peace", "author": "Christelle Pericles" }
https://oercommons.org/courseware/lesson/103639/overview
Example for Rolling Motion Overview This is an example to discuss the rolling motion during the lecture to have an active learning class. Rolling Motion This is an example to discuss the rolling motion during the lectures.
oercommons
2025-03-18T00:35:09.070054
05/08/2023
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/103639/overview", "title": "Example for Rolling Motion", "author": "Farid Mahboubi Nasrekani" }
https://oercommons.org/courseware/lesson/74705/overview
Creative Commons License Quiz Overview The following link will take you to the Creative Commons License Quiz: https://forms.gle/3PEZ9syDovgeJgvaA The information in this quiz has been adapted from the "Permissions Guide by Educators," and Creative Commons Licenses by Sagender Singh Parmar. This quiz was made by Aubree Evans for Branch Alliance for Educator Diversity.
oercommons
2025-03-18T00:35:09.082861
11/13/2020
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/74705/overview", "title": "Creative Commons License Quiz", "author": "Aubree Evans" }
https://oercommons.org/courseware/lesson/103734/overview
IHE Accessibility in OER Implementation Guide Overview In this section, you and your team will engage in a Landscape Analysis to uncover key structures and supports that can guide your work to support Accessibility in OER. You may or may not answer all of these questions, but this is an offering. May 11 - Section One: Landscape Analysis for Accessibility in OER in Local Context (Work on during May 11th implementation) In this section, you and your team will engage in a Landscape Analysis to uncover key structures and supports that can guide your work to support Accessibility in OER. We encourage you to explore some of the questions from each category. You may or may not answer all of these questions, but this is an offering. We ask that you complete Parts One, Two and Six. Part One: Initial Thoughts What is your team's initial goal for this series? Our initial goal is to gain a sense of where we are lacking in the area of creating and accessing accessible documents/websites and build knowledge and create action step/s. Part Two: Introductory probing questions: What does accessibility look like in our organization? How do we measure accessibility? We have talked about UDL overall but really have done less with diving into it- including the accessibility component. We have had the Office of Disability Services present at College of Education meetings- but these presentations have been brief, with limited follow-up. Therefore, we want to find ways to be intentional about incorporating this work into our daily practice. What does OER look like in our organization? How do we measure access to OER? Not sure this is currently happening and will become part of our work moving forward. Part Three: Clarifying questions for accessibility: What is the organizational structure that supports accessibility? Outside of the Office of Disability Services we really are not familiar with the organizational supports of the university as a whole.
We feel that it is important for our department especially to become more knowledgeable and active in this area. Who generates most of the accessibility structures/conversation in our organization? Office of Disability Services Where do most educators get support with accessibility? Office of Disability Services What content areas might have the largest gaps in access to accessibility? Perhaps syllabi (including required inserts provided by other departments- we will check on this) and presentations (e.g., ppts) and perhaps our LMS. Part Four: Clarifying questions for OER: What is our organizational structure that supports curricular resources? What is our organizational structure that supports OER? Who generates most of the curricular resources in our organization? Where do most educators get support with curricular resources? What content areas might have the largest gaps in access to curricular resources/OER? Part Five: Clarifying questions for Faculty learning and engagement: What Professional Learning (PL) structures have the best participation rates for our educators? We talked about building this work into already scheduled department meetings and providing chunks of SLIDE across the SY. What PL structures have the best "production" rates for our educators? What incentive do we have to offer people for participating in learning and engagement? Who are the educators that would be most creative with accessibility and OER? Who are the educators that would benefit the most from accessibility and OER? Part Six: Final Probing questions: What is our current goal for Accessibility in OER and why is that our goal? Who have we not yet included while thinking about this work? What barriers remain when considering this work? What would genuine change look like for our organization for this work?
Section Two: Team Focus (Finish before May 25th to share during Implementation Session Two) Identifying and Describing a Problem of Practice The following questions should help your team ensure that you are focusing your collaboration. What is your Team’s specific goal for this series? You may consider using AEM Quality Indicators for Creating Accessible Materials to help add to or narrow your work. What other partners might support this work? We will be more intentional about working with and following up with Office of Disability Services following info sessions. What is your desired timeframe for this work? 2023-24 SY, beginning at the first department meetings. Shawnee will speak with SPED Department Chair and Ann will speak with OSCP Assistant Dean. How will you include diverse voices and experiences in this work? Perhaps Office of Disability Services can help us find a student who would self-identify and support us in this work in order to capture the student voice. Please create a Focus Question that explains your goal and provides specific topics that you would like feedback on. This is what you will share in your breakout groups for feedback. (Save for during May 25th's session.) What feedback did you receive from another team during the May 25th Implementation Session? The team reviewing our proposal liked that we had manageable chunks that were intentionally building and looping back. They talked about the Guidance/Policy document that their organization had- and that prompted us to think more about what policies were already developed within our university. Section Three: Team Work Time and Next Steps (Complete by the end of Implementation Session Three) What was your redefined goal for this series? By April, 2024, the Special Education and Office of School and Community Partnerships department faculty will use components of SLIDE when developing content for at least one of their course presentations with 90% participation by all faculty.
We will ask Department Chair to add the materials to the department canvas page. Implementation Goal: Provide time to support faculty to improve accessibility of presentations (e.g., PPTs) of information within lecture- font, color contrast, image with alt text; committed time on Special Education Dept monthly meeting (M1- font, M2- color), 10 min; at end of the year, invite Joanna (ISKME) to check in on our SLIDE work. Possible timeline: 2023-24 Academic Year. Possible support groups: Joanna; Office of Disability Services. Voices that are not yet included: ODS may have a student they can recommend to provide voice. Feedback Question: MCC felt that our content was digestible and liked that it was cumulative. 2. What does your team want to celebrate? Coming together to learn, reflect, and develop action steps to address this important area. 3. What did your team accomplish? If you have links to resources, please include them here. Current PPTs as examples (pre/post SLIDE); AEM resources 4. What are your team’s next steps? Discuss proposal with SPED Department Chair and OSCP Assistant Dean; find time to discuss specific next steps (plan for the 2023-24 SY)
oercommons
2025-03-18T00:35:09.112502
Shawnee Wakeman
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/103734/overview", "title": "IHE Accessibility in OER Implementation Guide", "author": "Ann Jolly" }
https://oercommons.org/courseware/lesson/120754/overview
LaTeX: A Vital Tool for Research and Academia Overview LaTeX is a standard tool for writing research articles, research papers, dissertations, and reports. It is widely used in fields such as computer science, mathematics, engineering, chemistry, and physics. A Brief Introduction to the LaTeX Tool. In research writing, both clarity and professionalism are essential. LaTeX, short for "Lamport TeX," is a robust system for preparing documents, particularly well suited to scientific and technical writing. It is widely recognized for its ability to produce high-quality typeset documents. LaTeX is highly regarded as the optimal solution for experts in mathematics, science, and engineering. Its impressive ability to process intricate mathematical formulas, symbols, and notation is noteworthy. Additionally, it offers the flexibility to create custom templates for various document formats, including theses, research papers, and presentations. In LaTeX, one can structure research documents with sections, subsections, and appendices, and it automatically creates tables of contents and lists of figures. A .bib file is designed for storing bibliographic references in a particular format, which facilitates the independent management of citations. LaTeX can be used through tools such as MiKTeX, TeXworks, LaTeX Editor, Overleaf, and many more. LaTeX is an open-source program that anyone can use for free, and it benefits from considerable community support. LaTeX files are easily accessible and less likely to become corrupted. LaTeX is a vital resource for researchers and academics, offering a flexible and adaptable platform for preparing documents. Online LaTeX Editors: Visit website - https://www.overleaf.com Offline LaTeX editor and compiler links:
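As a minimal sketch of what a LaTeX source file looks like (a generic example; the file name refs.bib and the citation key knuth1984 are invented for illustration), the following document combines a section, a numbered equation, and a citation drawn from a .bib file:

```latex
\documentclass{article}

\begin{document}

\section{Introduction}
The mass--energy relation appears as equation~(\ref{eq:emc}),
and the supporting reference is managed independently in a .bib file.

% A numbered, cross-referenceable display equation
\begin{equation}
  E = mc^{2}
  \label{eq:emc}
\end{equation}

% The key "knuth1984" must exist as an entry in refs.bib
As discussed by \cite{knuth1984}, careful typesetting matters.

\bibliographystyle{plain}
\bibliography{refs}

\end{document}
```

Compiling with pdflatex, then bibtex, then pdflatex twice resolves the cross-reference and the citation; editors such as Overleaf run this sequence automatically.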
oercommons
2025-03-18T00:35:09.131535
10/16/2024
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/120754/overview", "title": "LaTeX: A Vital Tool for Research and Academia", "author": "S Ratna Manjari" }
https://oercommons.org/courseware/lesson/116972/overview
FAD Teaching Resource: Quiz Overview Teaching resource shared by a UNC System faculty member. Sample Quiz History 1160 3 October 2017 Quiz #4 What were the principal differences between James Madison’s arguments and Mercy Otis Warren’s criticism? Be sure to reference specific examples from the documents. Note—This was an in-class pop quiz that students had 15 minutes to complete. Prior to class, they read the Constitution, Madison’s Federalist No. 10, and Mercy Otis Warren’s critique of the Constitution, with the latter two documents located in Paul Johnson’s book Reading the American Past.
oercommons
2025-03-18T00:35:09.146406
06/18/2024
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/116972/overview", "title": "FAD Teaching Resource: Quiz", "author": "UNC System" }
https://oercommons.org/courseware/lesson/60461/overview
Chapter 2.4: Constitution of 1845 Overview Constitution of 1845 The Constitution of 1845, which provided for the government of Texas as a state in the United States, was almost twice as long as the Constitution of the Republic of Texas. The framers, members of the Convention of 1845, drew heavily on the newly adopted Constitution of Louisiana and on the constitution drawn by the Convention of 1833, but apparently used as a working model the Constitution of the republic for a general plan of government and bill of rights. The legislative department was composed of a Senate of from nineteen to thirty-three members and a House of Representatives of from forty-five to ninety. Representatives, elected for two years, were required to have attained the age of twenty-one. Senators were elected for four years, one-half chosen biennially, all at least thirty years old. Legislators’ compensation was set at three dollars a day for each day of attendance and three dollars for each twenty-five miles of travel to and from the capital. All bills for raising revenue had to originate in the House of Representatives. Austin was made the capital until 1850, after which the people were to choose a permanent seat of government. A census was ordered for each eighth year, following which adjustment of the legislative membership was to be made. Regular sessions were biennial. Ministers of the Gospel were ineligible to be legislators. The governor’s term was two years, and he was made ineligible for more than four years in any period of six years. He was required to be a citizen and a resident of Texas for at least three years before his election and to be at least thirty years of age. He could appoint the attorney general, secretary of state, and supreme and district court judges, subject to confirmation by the Senate; but the comptroller and treasurer were elected biennially by a joint session of the legislature.
The governor could convene the legislature and adjourn it in case of disagreement between the two houses and was commander-in-chief of the militia. He could grant pardons and reprieves. His veto could be overruled by two-thirds of both houses. The judiciary consisted of a Supreme Court, district courts, and such inferior courts as the legislature might establish, the judges of the higher courts being appointed by the governor for six-year terms. The Supreme Court was made up of three judges, any two of whom constituted a quorum. Supreme and district judges could be removed by the governor on address of two-thirds of both houses of the legislature for any cause that was not sufficient ground for impeachment. A district attorney for each district was elected by joint vote of both houses, to serve for two years. County officers were elected for two years by popular vote. The sheriff was not eligible to serve more than four years of any six. Trial by jury was extended to cases in equity as well as in civil and criminal law. The longest article of the constitution was Article VII, on General Provisions. Most of its thirty-seven sections were limitations on the legislature. One section forbade the holding of office by any citizen who had ever participated in a duel. Bank corporations were prohibited, and the legislature was forbidden to authorize individuals to issue bills, checks, promissory notes, or other paper to circulate as money. The state debt was limited to $100,000, except in case of war, insurrection, or invasion. Equal and uniform taxation was required; income and occupation taxes might be levied; each family was to be allowed an exemption of $250 on household goods. A noteworthy section made exempt from forced sale any family homestead, not to exceed 200 acres of land or city property not exceeding $2,000 in value; the owner, if a married man, could not sell or trade the homestead except with the consent of his wife. 
Section XIX recognized the separate ownership by married women of all real and personal property owned before marriage or acquired afterwards by gift or inheritance. Texas was a pioneer state in providing for homestead protection and for recognition of community property. In the article on education the legislature was directed to make suitable provision for support and maintenance of public schools, and 10 percent of the revenue from taxation was set aside as a Permanent School Fund. School lands were not to be sold for twenty years but could be leased, the income from the leases becoming a part of the Available School Fund. Land provisions of the Constitution of 1836 were reaffirmed, and the General Land Office was continued in operation. By a two-thirds vote of each house an amendment to the constitution could be proposed. If a majority of the voters approved the amendment and two-thirds of both houses of the next legislature ratified it, the measure became a part of the constitution. Only one amendment was ever made to the Constitution of 1845. It was approved on January 16, 1850, and provided for the election of state officials formerly appointed by the governor or by the legislature. The Constitution of 1845 has been the most popular of all Texas constitutions. Its straightforward, simple form prompted many national politicians, including Daniel Webster, to remark that the Texas constitution was the best of all of the state constitutions. Though some men, including Webster, argued against the annexation of Texas, the constitution was accepted by the United States on December 29, 1845. Reading Review Questions - Why was the constitution of 1845 written? - What does biennial mean and what does the establishment of a biennial legislature indicate Texans desired in their legislative branch? - What powers did the state governor have over the state courts? - What was a prominent feature of Article VII of the Texas constitution of 1845? 
- What policy area was well provided for by the new constitution? - Why is the 1845 constitution considered one of the most popular in Texas history? For More Information
https://tarltonapps.law.utexas.edu/constitutions/texas1845
More information on the Constitution of Texas (1845) may be found at the Texas Constitutions 1824-1876 project of the Tarlton Law Library, Jamail Center for Legal Research at the University of Texas School of Law, The University of Texas at Austin. The project includes digitized images and searchable text versions of the constitutions.
oercommons
2025-03-18T00:35:09.164232
Annette Howard
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/60461/overview", "title": "Texas Government 1.0, Texas' Constitution, Chapter 2.4: Constitution of 1845", "author": "Reading" }
https://oercommons.org/courseware/lesson/60442/overview
Chapter 1.5: Governor E.J. Davis Overview Governor E.J. Davis Learning Objectives By the end of this section, you will be able to: - Understand the role and importance of Governor E.J. Davis in Texas' history Introduction Edmund Jackson Davis (October 2, 1827 – February 24, 1883) was an American lawyer, soldier, and politician. He was a Southern Unionist and a general in the Union Army in the American Civil War. He also served for one term from 1870 to 1874 as the 14th Governor of Texas. Civil War Years In early 1861, Edmund Davis supported Governor Sam Houston in their mutual stand against secession. Davis also urged Robert E. Lee not to violate his oath of allegiance to the United States. Davis ran to become a delegate to the Secession Convention but was defeated. He thereafter refused to take an oath of allegiance to the Confederate States of America[1] and was removed from his judgeship. He fled from Texas and took refuge in Union-occupied New Orleans, Louisiana. He next sailed to Washington, D.C., where President Abraham Lincoln issued him a colonel’s commission with the authority to recruit the 1st Texas Cavalry Regiment (Union).[2] Davis recruited his regiment from Union men who had fled from Texas to Louisiana. The regiment would see considerable action during the remainder of the war. On November 10, 1864, President Lincoln appointed Davis as a brigadier general of volunteers. Lincoln did not submit Davis’s nomination to this grade to the U.S. Senate until December 12, 1864.[3] The U.S. Senate confirmed the appointment on February 14, 1865.[4] Davis was among those present when General Edmund Kirby Smith surrendered the Confederate forces in Texas on June 2, 1865.[5] Davis was mustered out of the volunteers on August 24, 1865.[6] Post War Following the end of the war, Davis became a member of the 1866 Texas Constitutional Convention.
He supported the rights of freed slaves and urged the division of Texas into several Republican-controlled states. In 1869, he was narrowly elected governor against Andrew Jackson Hamilton, a Unionist Democrat. As a Radical Republican during Reconstruction, his term in office was controversial. On July 22, 1870, the Texas State Police came into being to combat crime statewide in Texas. It worked against racially-based crimes, and included black police officers, which caused protest from former slaveowners (and future segregationists). Davis created the “State Guard of Texas” and the “Reserve Militia,” which were forerunners of the Texas National Guard.[7] Davis’ government was marked by a commitment to the civil rights of African Americans. One of his protégés was Norris Wright Cuney of Galveston, who continued the struggle for equality until his own death in 1896 and is honored as one of the important figures in Texas and American black history. Though Davis was highly unpopular among former Confederates, and most material written about him for many years was unfavorable, he was considered to have been a hero for the Union Army. He also gained the respect and friendship of Spanish-speaking residents on the Rio Grande frontier.[8] In 1873, Davis was defeated for reelection by Democrat Richard Coke (42,633 votes to 85,549 votes) in an election marked by irregularities. Davis contested the results and refused to leave his office on the ground floor of the Capitol. Democratic lawmakers and Governor-elect Coke reportedly had to climb ladders to the Capitol’s second story where the legislature convened. When President Grant refused to send troops to the defeated governor’s rescue, Davis reluctantly left the capital in January 1874. He locked the door to the governor’s office and took the key, forcing Coke’s supporters to break in with an axe.[9] John Henninger Reagan helped to oust him after he tried to stay in office beyond the end of his term. 
Davis was the last Republican governor of Texas until Republican Bill Clements defeated the Democrat John Luke Hill in 1978 and assumed the governorship the following January, 105 years after Davis vacated the office. Following his defeat, Davis was nominated to be collector of customs at Galveston but declined the appointment because he disliked U.S. President Rutherford B. Hayes. He ran for governor again in 1880 but was soundly defeated. His name was placed in nomination for Vice President of the United States at the 1880 Republican National Convention, which met in Chicago and chose James A. Garfield as the standard-bearer. Had Davis succeeded, he might have wound up in the White House, as did Chester A. Arthur, the man who received the vice presidential nomination that year. Davis lost an election for the United States House of Representatives in 1882. After Democrats regained power in the state legislature, they passed laws making voter registration more difficult, such as requiring payment of poll taxes, which worked to disfranchise blacks, Mexican Americans and poor whites. They also instituted a white primary. In the 1890s, more than 100,000 blacks were voting but by 1906, only 5,000 managed to get through these barriers.[10] As Texas became essentially a one-party state, the white primary excluded minorities from the political competitive process. They did not fully recover their constitutional rights until after enforcement under the Voting Rights Act of 1965. Edmund J. Davis died in 1883 and was given a war hero’s burial at the Texas State Cemetery in Austin. A large gravestone was placed in Davis’ honor by a brother. Davis was survived by his wife, the former Anne Elizabeth Britton (whose father, Forbes Britton, had been chief of staff to Texas Governor Sam Houston), and two sons: Britton (a West Point graduate and military officer), and Waters (an attorney and merchant in El Paso).[11] Reading Review Questions - What was Edmund J. 
Davis’ stand on secession? What did he refuse to do when Texas seceded? - What role did Davis play in the Civil War? - When was Davis elected governor of Texas and for what political party? - What law enforcement units did Davis create while governor? - To what two groups of Texans was Davis considered a friend? - What two rebellious things did Davis do when he lost reelection in 1873? - Why can it be said that E.J. Davis was almost President of the United States? Notes - Odie Arambula, "Young lawyer Davis had big local role," Laredo Morning Times, May 6, 2012, p. 17A ↵ - Handbook of Texas Online. Moneyhon, Carl H. (30 May 2010). "Davis, Edmund Jackson". Texas State Historical Association. Retrieved 29 September 2010. ↵ - Eicher, John H., and David J. Eicher, Civil War High Commands. Stanford: Stanford University Press, 2001. ISBN 0-8047-3641-3. p. 720 ↵ - Eicher, John H., and David J. Eicher, Civil War High Commands. Stanford: Stanford University Press, 2001. ISBN 0-8047-3641-3. p. 720 ↵ - Handbook of Texas Online. Moneyhon, Carl H. (30 May 2010). "Davis, Edmund Jackson". Texas State Historical Association. Retrieved 29 September 2010. ↵ - Eicher, John H., and David J. Eicher, Civil War High Commands. Stanford: Stanford University Press, 2001. ISBN 0-8047-3641-3. p. 720 ↵ - Handbook of Texas Online. Olsen, Bruce A. (30 May 2010). "Texas National Guard". Texas State Historical Association. Retrieved 29 September 2010. ↵ - Odie Arambula, Visiting the Past column, "Radical Republican Davis had support", Laredo Morning Times, 20 May 2012, p. 15A ↵ - Brown, Lyle C., Langenegger, Joyce A., Garcia, Sonia R., et al. PRACTICING TEXAS POLITICS, Thirteenth Edition. Boston: Houghton Mifflin, 2006. (Pages 67-68) ↵ - African-American Pioneers of Texas: From the Old West to the New Frontiers (Teacher’s Manual) (PDF). Museum of Texas Tech University: Education Division. p. 25. Archived from the original (PDF) on 2007-02-05. ↵ - Handbook of Texas Online. 
Moneyhon, Carl H. (30 May 2010). "Davis, Edmund Jackson". Texas State Historical Association. Retrieved 29 September 2010. ↵
Annette Howard
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/60442/overview", "title": "Texas Government 1.0, Texas History and Culture, Chapter 1.5: Governor E.J. Davis", "author": "Reading" }
Chapter 2.2: Constitution of Coahuila And Texas (1827) Overview Constitution Of Coahuila And Texas (1827) The Constitution of 1824 of the Republic of Mexico provided that each state in the republic should frame its own constitution. The state of Coahuila and the former Spanish province of Texas were combined as the state of Coahuila and Texas. The legislature for the new state was organized at Saltillo in August 1824, with the Baron de Bastrop representing Texas. More than two years was spent on the framing of a constitution, which was finally published on March 11, 1827. The constitution divided the state into three departments, of which Texas, as the District of Bexar, was one. The Catholic religion was made the state religion; citizens were guaranteed liberty, security, property, and equality; slavery was forbidden after promulgation of the constitution, and there could be no import of slaves after six months. Citizenship was defined and its forfeiture outlined. Legislative power was delegated to a unicameral legislature composed of twelve deputies elected by popular vote; Texas was allowed two of the twelve. The body, which met annually from January through April and could be called in special session, was given wide and diverse powers. In addition to legislative functions, it could elect state officials if no majority was shown in the regular voting, could serve as a grand jury in political and electoral matters, and could regulate the army and militia. It was instructed to promote education and protect the liberty of the press. Executive power was vested in a governor and vice governor, elected for four-year terms by popular vote. The governor could recommend legislation, grant pardons, lead the state militia, and see that the laws were obeyed. The vice governor presided over the council and served as police chief at the capital. 
The governor appointed for each department a chief of police, and an elaborate plan of local government was set up. Judicial authority was vested in state courts having charge of minor crimes and civil cases. The courts could try cases but could not interpret the law; misdemeanors were tried by the judge without a jury. Military men and ecclesiastics were subject to rules made by their own orders. Trial by jury, promised by the constitution, was never established, nor was the school system ever set up. The laws were published only in Spanish, which few Anglo-Texans could read. Because of widespread objections to government under this document, the Convention of 1833 proposed a new constitution to give Texas statehood separate from Coahuila. Reading Review Questions - When were the state of Coahuila and province of Texas combined into one state? - What was made the state religion of Coahuila and Texas? - What three freedoms were citizens guaranteed? - What was forbidden once the constitution went into effect? - How many branches of government did the constitution establish? - What two things, promised by the constitution, were never established? - Why did most Anglo-Texans not understand the laws? - What did dislike for the constitution prompt Texans to do? For More Information More information on the Constitution Of Coahuila And Texas (1827) may be found at the Texas Constitutions 1824-1876 project of the Tarlton Law Library, Jamail Center for Legal Research at the University of Texas School of Law, The University of Texas at Austin. The project includes digitized images and searchable text versions of the constitutions.
Annette Howard
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/60451/overview", "title": "Texas Government 1.0, Texas' Constitution, Chapter 2.2: Constitution of Coahuila And Texas (1827)", "author": "Reading" }
Chapter 1.6: The Texas Oil Boom Overview Learning Objectives By the end of this section, you will be able to: - Understand the history of Texas' oil boom - Understand the importance of Texas' oil boom The Gusher Age The Texas oil boom, sometimes called the gusher age, was a period of dramatic change and economic growth in the U.S. state of Texas during the early 20th century that began with the discovery of a large petroleum reserve near Beaumont, Texas. The find was unprecedented in its size and ushered in an age of rapid regional development and industrialization that has few parallels in U.S. history. Texas quickly became one of the leading oil-producing states in the U.S., along with Oklahoma and California; soon the nation overtook the Russian Empire as the top producer of petroleum. By 1940 Texas had come to dominate U.S. production. Some historians even define the beginning of the world’s Oil Age as the beginning of this era in Texas.[1] The major petroleum strikes that began the rapid growth in petroleum exploration and speculation occurred in Southeast Texas, but soon reserves were found across Texas and wells were constructed in North Texas, East Texas, and the Permian Basin in West Texas. Although limited reserves of oil had been struck during the 19th century, the strike at Spindletop near Beaumont in 1901 gained national attention, spurring exploration and development that continued through the 1920s and beyond. Spindletop and the Joiner strike in East Texas, at the outset of the Great Depression, were the key strikes that launched this era of change in the state. The Importance of Oil to Texas' Development This period had a transformative effect on Texas. 
At the turn of the century, the state was predominantly rural with no large cities.[2] By the end of World War II, the state was heavily industrialized, and the populations of Texas cities had broken into the top 20 nationally.[3] The city of Houston was among the greatest beneficiaries of the boom, and the Houston area became home to the largest concentration of refineries and petrochemical plants in the world.[4] The city grew from a small commercial center in 1900 to one of the largest cities in the United States during the decades following the era. This period, however, changed all of Texas’ commercial centers (and developed the Beaumont/Port Arthur area, where the boom began). H. Roy Cullen, H. L. Hunt, Sid W. Richardson, and Clint Murchison were the four most influential businessmen during this era. These men became among the wealthiest and most politically powerful in the state and the nation. Reading Review Questions - What happened at Spindletop in Beaumont in 1901? - How did the Spindletop discovery change Texas’ role in the U.S. economy? - Why was Texas not hit quite as hard as the rest of the country by the Great Depression? - How did the “gusher age” transform Texas by the end of World War II? Notes - Olson, James Stuart (2001). Encyclopedia of the industrial revolution in America. Westport, CT: Greenwood Press. ISBN 978-0-313-30830-7. p.238. ↵ - "Population of the 100 Largest Urban Places: 1900". U.S. Census Bureau. Retrieved November 3, 2009. ↵ - "Population of the 100 Largest Urban Places: 1950". U.S. Census Bureau. Retrieved November 2, 2009. "Population of the 100 Largest Urban Places: 1940". U.S. Census Bureau. Retrieved November 2, 2009. ↵ - "Chapter Two: Galveston Bay" (PDF). Texas A&M University-Galveston: Galveston Bay Information Center (Galveston Bay Estuary Project). Archived from the original (PDF) on July 20, 2011. Retrieved September 8, 2009. ... 
it [Galveston Bay] is at the center of the state's petrochemical industry, with 30 percent of U.S. petroleum industry and nearly 50 percent of U.S. production of ethylene and propylene Occuring [sic] on its shores. Weisman (2008), p. 166, "The industrial megaplex that begins on the east side of Houston and continues uninterrupted to the Gulf of Mexico, 50 miles away, is the largest concentration of petroleum refineries, petrochemical companies, and storage structures on Earth." ↵
12/06/2019
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/60443/overview", "title": "Texas Government 1.0, Texas History and Culture, Chapter 1.6: The Texas Oil Boom", "author": "Annette Howard" }
https://oercommons.org/courseware/lesson/88115/overview
The Chameleon in the Kremlin: Contemporary Russia under Putin Overview New Millennium, New President: the Rise of Vladimir Putin, 1999-2004 In the 1990s, economic collapses and defeatist attitudes plagued Russia. Many Russian banks had collapsed due to mismanagement, as well as the rapid transition from a communist to a capitalist economic system. Russia no longer stood as a strong leader in world affairs. Instead, Russian wealth was held in the hands of a few oligarchs, while most of the country suffered from plummeting standards of living. Russia’s president, Boris Yeltsin, had been democratically elected after the collapse of the Soviet Union in 1991. But by the late 1990s, Yeltsin was in his upper 60s and ill. He increasingly relied on aides to help him with speeches, public appearances, and decision-making. In August 1999, Yeltsin appointed the secretary of the Russian Security Council as his prime minister. The maneuver surprised many within and outside Russia. At forty-six years old, Vladimir Putin was relatively young, and largely unknown to both Russians and foreigners. Four months later, Boris Yeltsin resigned as president of Russia and named Putin his successor. The following spring, Putin transformed from an obscure security agent to the leader of the largest nation on earth when he was elected the second president of Russia. Learning Objectives - Analyze Vladimir Putin’s presidency in Russia. 
Key Terms / Key Concepts Alexei Navalny: opposition leader to Vladimir Putin Chechnya: republic in southern Russia that has long sought complete independence Dmitry Medvedev: president of Russia (2008 – 2012) Georgia: small, independent nation in the Caucasus region of Europe oligarchs: Russian billionaire businessmen who gained extreme wealth and political influence during the late 1980s and 1990s Vladimir Putin: president of Russia (2000 – 2008; 2012 – Present) Putin's Early Career “Any cook should be able to run the country.” So said Vladimir Lenin about the egalitarian nature of the communist Soviet Union. In the Soviet Union, workers were the backbone of communism: in ideology and daily life. Therefore, any cook, any bricklayer, any shoemaker would represent the interests of and understand the people so well that they could theoretically govern the world’s largest communist state. Little did Vladimir Lenin know that his personal cook, Spiridon Putin, would one day have a grandson named Vladimir who would govern Russia. Background Vladimir Putin was born on October 7, 1952, in Saint Petersburg (then called Leningrad). In his autobiography, Putin outlined growing up in a modest, communal apartment with his working-class parents. He described himself, initially, as a poor student who was heavily interested in sports. Despite Soviet policies of the day that curbed religion, Putin also recalled that his mother had secretly baptized him as an Orthodox Christian. By his teenage years, Putin had transformed into a serious and inquisitive young man interested in Soviet politics. He excelled in the law program at Leningrad State University, and upon graduation, accepted a position as a security officer with the Soviet security agency, the KGB. Putin worked as a KGB agent for fifteen years. After retiring from the KGB, Putin returned to Saint Petersburg where he excelled in local politics. 
In the early 1990s, Putin moved to Moscow with his (then) wife and two young daughters to pursue his political career. Seemingly out of nowhere, quiet and private Vladimir Putin climbed the political ranks, serving as an advisor to Anatoly Chubais, then one of the most influential officials in President Yeltsin’s government. By 1998, Putin had developed such close connections to Boris Yeltsin’s inner circle that Yeltsin named him director of the FSB—Russia’s intelligence agency and successor agency to the KGB. Within a year, Putin had again climbed the ranks and was appointed Secretary of the Security Council of Russia. In that capacity, Putin would regularly meet with President Yeltsin and the heads of Russian defense. Then, in a surprising maneuver, Putin was named prime minister of Russia by Boris Yeltsin. Although Putin was unknown and enigmatic to most of Russia when he stepped into his new role, he would soon make his name known across the country. 1999: The Critical Year for Vladimir Putin In the fall of 1999, two key events occurred in Russia that launched Putin into the forefront of Russian attention. The first event occurred in August, just after Putin assumed his position as prime minister of Russia. An Islamist militant group invaded Dagestan—a Russian republic in the southern Caucasus, a mountainous region bordering the Caspian Sea. News stories in Russia proclaimed that the invading force committed atrocities against Russian soldiers. At home, Russians feared their country was weak and that it might be carved up as the former Soviet Union had been. A month later, a series of apartment bombings swept through three cities in Russia, including Moscow. Over 300 Russians were killed, with over 1,000 more injured. No perpetrators were concretely identified, and ever since the events, there has been much speculation about who carried out the bombings. Some analysts even speculate that the bombings were a false-flag operation. 
They suggest that the Russian FSB had planted the bombs with the intent of placing blame elsewhere and generating support for Yeltsin’s failing presidency. But despite these rumors and stories, most Russian eyes turned to a longstanding adversary: militants from Chechnya—a small republic in the Caucasus region of southern Russia, and a neighbor to Dagestan. It was the narrative the Russian government and media wanted Russians to buy. And to the average Russian, the story made sense. Under Yeltsin, Russia had just fought a war against Chechnya that left many people on both sides discontented. For although Chechnya operated independently, it remained a part of Russia. Chechens desperately sought complete independence. Russians, in contrast, sought more control over what they saw as a violent and unstable area. The competing ideas set Chechnya and Russia on a collision course with one another. Putin understood that Russians felt defeated in 1999. The bombings confirmed Russian fears that the world viewed their country as a place of instability and mass violence. Every stereotype and fear Russians sought to avoid rained down on them in 1999. The people desperately needed a hero: one who would give them hope for a brighter future, restore the glory of Russia, and crush their enemies. In the wake of the apartment bombings, Putin stepped in front of the news cameras and overtly blamed the Chechens for the bombings across Russia. Chechnya, he assured them, would pay for its crimes. He promised that Russia would punish Chechnya and the forces that invaded Dagestan, that Russia would avenge the deaths of their soldiers, and that the country would persevere and reign triumphant. Confident, tough, but calm under pressure, Putin was exactly the leader Russians sought in their hour of crisis. Following the apartment bombings, Russia launched airstrikes on Chechnya, and then a land invasion of the northern half of Chechnya. 
Thousands of Chechens were killed, with thousands more displaced in the war that ensued. In December 1999, Yeltsin named Vladimir Putin the “acting president” of Russia as he resigned from the office. Facing an election that spring, Putin knew he had to demonstrate his strength as a leader. Under his order, the Russian military launched a massive campaign to capture the Chechen capital, Grozny. In February 2000, they succeeded. Although the war against Chechnya would continue for nearly a decade, Putin’s popularity exploded across Russia. Across the country, Russians turned out to vote for the next president. Unsurprisingly, and overwhelmingly, Putin was elected as president of Russia in March 2000. President Putin's First Term President Putin entered the Kremlin in March 2000 with resounding popular support because of his strong stance against Chechnya. But his popularity suffered in August because of a military disaster. The Russian nuclear submarine, the Kursk, embarked on a military training exercise with the Russian naval fleet in the Barents Sea off Russia’s northwest coast in the Arctic Circle. Despite its reputation as an invincible submarine, two massive explosions tore through the Kursk during the exercise. The explosions sank the Kursk. Reportedly, nearly 100 of the crew were killed in the initial explosions and subsequent fires that spread throughout the submarine. But a handful of the crew made it to one of the submarine’s compartments that had survived the blasts. There, they waited for help to arrive, and undoubtedly expected it would come. The submarine had been part of a large convoy sent to perform a military exercise. Surely, the naval fleet would notice it was missing. Moreover, the Kursk had sunk in relatively shallow, if icy, water not far from Russia’s port of Murmansk. But no help came. Delays in communication extended from the military to President Putin, who was vacationing at the time. 
When news reached Putin, he was slow to report it to the media and delayed help from Western navies, despite the fact that Russian military reports claimed crews heard clanging sounds coming from survivors aboard the Kursk. After a week, Putin allowed Western navies to mount a rescue attempt. But it was too late. All 118 sailors had perished when the British rescue team arrived. Anger toward the new president swept through Russia. Families demanded explanations and called for Putin’s dismissal. Fears spread that old Soviet practices of secrecy and cover-ups had returned. Moreover, the disaster and Putin’s inaction seemed to signal that Russia was still decades behind the West in its development. The event humiliated Putin, and his popularity plummeted. In response to what he deemed excessive and inaccurate media coverage of the event, Putin clamped down on the media and regulated coverage of the Kursk disaster. He later visited the families of the sailors who had perished and provided them with financial compensation. Since 2000, memorials have been created in honor of the men who perished aboard the Kursk. Russia Engages the World: Putin and Foreign Affairs Vladimir Putin weathered the shock his presidency received after the Kursk disaster. In part, he survived and rebounded because of his determination to make Russia strong in the eyes of the world. In particular, he concentrated on projecting Russian strength when dealing with foreign heads of state. In the early 2000s, President Putin emphasized the need for a “multipolar” world. By this, he meant a world in which there was more than one clear center of power and influence, one beyond Western Europe and the United States. He sought to connect with the West, while also remaining committed to the idea that Russia would again be a strong world power. Simultaneously, he believed that China and other regions in Asia should be strong world actors on equal footing with the West. 
And he was determined to develop Russia’s connection with the East, as well as the West. Putin was initially keen to work with Western nations, including the United States. He even proposed to President Bill Clinton the idea of Russia joining NATO, presenting a new Russia free from Soviet-era policies. Naturally, the conversation went no further. Putin also kindled a relationship with President George W. Bush, in which the younger Bush famously quipped that he had “gotten a sense of Putin’s soul.” To Americans, it signaled hope that despite their long, adversarial relationship, Russia and the United States might be entering a new era of friendship and cooperation. Hope was further kindled when, on September 11, 2001, Putin was the first head of state to contact President George W. Bush and offer his support. He pledged Russian assistance in helping the United States and the West track down and eliminate terrorists. However, he stopped short of actually aiding the U.S. and vehemently opposed the United States’ war in Iraq. Putin’s policies toward foreign countries in the first two terms of his presidency projected, more than anything, the idea that Russia was a nation willing to work with others, regardless of political divides. However, he was always careful to emphasize that the new Russia, his Russia, was strong and would operate on its own terms. He would not be in the pocket of the West, as his predecessor, Boris Yeltsin, was. Nor would he allow foreign governments to intimidate him or threaten Russia in any way. In 2005, he famously described the collapse of the Soviet Union as the “greatest geopolitical disaster of the 20th Century.” While that quote has often been used to assess actions later undertaken by him, it is equally important to consider his message at the time. Putin, who had grown up in the communist Soviet Union, saw the collapse of the state as a major blow to Russian strength and international prestige. 
He also saw it as artificially bolstering the importance of the West over all other regions of the world. At the time of his speech, Putin wanted to alter both of those outcomes by building up Russian military strength and re-establishing Russia as a major global actor in a multipolar world. For the Good of Russia?: Putin's Domestic Policies The Russia that Putin inherited from President Yeltsin in 2000 was not one that anyone would particularly relish. Spanning eleven time zones, Russia was enormous. Much of its population was impoverished, unemployed, and frustrated. Russia writhed with violence, drugs, and crime, particularly in Moscow. The life expectancy for Russian men in the 1990s was remarkably short for an industrialized nation. In 1999, the U.S. National Institutes of Health reported that the life expectancy for Russian men was 58 years. The political and economic instability of the 1990s had prompted surging alcoholism rates in Russia, primarily among men. As a result, alcohol-related deaths also surged. Along with the social ills of a massive, unstable country came an explosion in all types of crime. Organized crime, violent crime, and petty crime all exploded throughout Russia during the politically chaotic 1990s and into the early 2000s. Politically, Russia also was rife with corruption at every level. Indeed, when Putin stepped into the role of president of Russia, his work was cut out for him. And it was far from attractive. The question on everyone’s mind was simple: would Putin hold out a hand for the common, impoverished, working Russian, or would he align with the wealthy, corrupt oligarchs whose shady business endeavors had brought them unprecedented wealth? Ever enigmatic, few people could guess Putin’s next move, and many underestimated the political skill of their new president. Putin Tackles the Russian Economy First on Putin’s agenda of domestic affairs was stabilizing and improving the Russian economy. 
The task was as enormous as the country itself. From 1917 – 1991, Russia had been a communist society in which trade and industry were strictly controlled by the government. Wealth was distributed by the government to individuals and families based on need and ability. Then, almost overnight, a dramatic shift in economic policies occurred. With the collapse of the Soviet Union in 1991 came the collapse of Russian communism. In its place, a capitalist economic system was installed. The lightspeed transition produced shockwaves across Russia. Capitalism stood in direct opposition to communism. Instead of strict government regulation, capitalism favored the individual, private property, private wealth, and fierce economic competition. The transition left many ordinary Russians confused, and wondering how, and from where, they would earn enough money to support themselves. The economic crisis deepened during the rise of a group of Russian oligarchs. Hyper-wealthy, fiercely intelligent and ruthless businessmen, these individuals had obtained their wealth during the mid-1980s and early 1990s under the last Soviet leader, Mikhail Gorbachev, who allowed limited privatization of the Russian economy. These businessmen often lived abroad and brought Western products to Russia to be sold on the black market for astronomical prices. The same group of men also created vast oil and natural gas companies. As their wealth soared, so too did their political influence. Many scholars claim that the oligarchs were the actual government in the 1990s, and Boris Yeltsin a simple figurehead president. In any case, one thing about the Russian economy in the 1990s was true: it floundered. Power and vast wealth remained in the hands of a very few, shady Russian businessmen, most of whom lived abroad and had foreign bank accounts. For the remaining 99% of Russians, life proved exceptionally difficult. 
Putin Brings "Improvement" to the Russian Economy Vladimir Putin had to improve Russia’s economy to remain in power. One of his first acts of business was to nationalize much of Russia’s energy sector. This maneuver allowed for the growth of Russian industries for the first time in over a decade. It created jobs and dramatically reduced unemployment in Russia. Global demand and prices for Russian oil and natural gas skyrocketed, in part due to the West’s wars in Iraq and Afghanistan. Moreover, Putin was firm in his deals with the West. If Western nations wanted Russian gas and oil, they must pay Russia fairly. Much of the West complied, and to-date, Russia continues to supply most of Europe with natural gas and oil. By 2004, Russia’s economy thrived due to Putin’s regulation of the energy sector, and a massive tax reform he undertook. Unemployment dropped, and the standards of living rose sharply. With these social gains, Russian people began investing in the economy, and consumerism boomed. The middle class expanded, and wealth slowly began to be more evenly distributed. It appeared that after a relatively short time of trials, capitalism seemed destined to triumph in Russia. In 2004, Vladimir Putin also seemed destined to triumph in Russia, as he won re-election and began his second term as president. Restoration of a Dictator? Putin and the Oligarchs Much of Putin’s popularity can be boiled down to two things: strengthening Russian prestige abroad and strengthening the Russian economy at home. But his relationship with the oligarchs was complicated from the earliest days of his presidency. Behind the scenes, they facilitated his political rise and win of the presidency. But they were enormously unpopular with the Russian people. To remain popular, Putin needed to be seen challenging them. A closed-doors deal was struck between him and the billionaire businessmen. 
He would allow them to keep their personal wealth, assets, and companies in exchange for complete loyalty and nonintervention in government affairs. The agreement seemed, initially, to work. The oligarchs retained their wealth and pledged loyalty, and often millions of dollars, to Putin. It was corruption on a grand scale. Putin warned that any oligarch who broke away from him would be severely disciplined. Very likely, his threat seemed laughable to the oligarchs at the time, who were accustomed to dealing with the malleable Yeltsin. They, like so many others, underestimated the strength and skill of their new president. Famously, Russia’s wealthiest oligarch, Mikhail Khodorkovsky, broke with Putin. Underestimating Putin as a politician and former KGB agent, Khodorkovsky was arrested at gunpoint on charges of fraud and tax evasion. He was later tried and sentenced to ten years in prison. After eight years, and much political campaigning on his behalf, Khodorkovsky was released and allowed to live in exile abroad. He would not be the last of the oligarchs to pay a price for breaking with Vladimir Putin. Putin's Second Term (2004-2008) From the onset of his presidency, Vladimir Putin was intensely secretive about much of his private life. His daughters were all but unknown to the world, having been only very rarely photographed. He would provide evasive answers to reporters about the sources of his wealth or how much wealth he had. Increasingly in his second term, Westerners and Russians alike pointed to his service as a KGB agent as a formative experience. He would conceal much about himself, while simultaneously (and often subtly) getting others to reveal information. He could transform himself to become who he thought his audience needed him to be. Like a chameleon, he could disguise and alter his persona to suit the situation at hand. 
While this might have worked well for Putin personally, it sparked unease among Russians and foreigners alike. Although Putin had enjoyed an overwhelming re-election in 2004, his second term would usher forth new fears at home that he was increasingly becoming an authoritarian leader. Mr. Putin's Wars During his time as a KGB agent, and as a young politician in Saint Petersburg, Putin witnessed the dissolution of both the Soviet Union and the communist state of Yugoslavia. It is likely that these events impacted him deeply. From the beginning of his presidency, the restoration of Russia as a strong world leader has been one of his primary goals. On the opposite side of the coin, the carving up of Russia in a manner like Yugoslavia is likely one of his great fears. Therefore, he has historically reacted harshly to any perceived threat to Russia’s progression as a world leader, be it an internal or external threat. The Beslan Hostage Crisis On September 1, 2004, a group of Islamist militants entered a school in the town of Beslan, in North Ossetia, a republic in southern Russia neighboring Chechnya. They quickly took over 1,100 hostages, including teachers, students, and parents who had accompanied their children to school for a day of planned festivities. They drove the hostages into the school gym, and proceeded to rig explosives to the basketball goals, and throughout the gym. Outside the school, Russian forces massed by the thousands, and a siege began. For three days, the captors held their hostages, shooting some of the male teachers, and refusing food and drink to anyone. During the crisis, the militants demanded that Russia recognize the complete independence of Chechnya—a request Putin would never grant. On the third day of the siege, Russian forces were able to overwhelm the terrorists. With the help of tanks, Russian forces stormed the school. Their action defeated the terrorists, but not without heavy loss of life. 
At the end of the siege, more than 300 hostages had perished, most of them children. The Beslan hostage crisis provided Putin with the context he needed to further crack down on internal dissent. His response was swift and sharp. The direct election of regional governors was abolished throughout Russia; instead, they would be appointed directly by the Kremlin. Increasingly, Russian media reported on Putin's crackdowns and asserted that his power stretched too far. In response, Putin launched a new crackdown targeting the media, which, he believed, spread lies and misinformation about the government.

Putin's Crackdowns: The Media and Political Opponents

Following the Kursk incident in 2000, Putin launched a campaign to severely restrict all independent media outlets in Russia. The campaign was undertaken in the name of ending "misinformation" spread by these news agencies. In Putin's view, most news reports that diverged from official, state-sponsored media outlets constituted misinformation. Media crackdowns persisted and intensified following every major crisis experienced within Russia. In some cases, Putin ordered the arrest and imprisonment of owners of media outlets, including the Russian oligarch Vladimir Gusinsky. One by one, independent news agencies were shut down or brought under the direct control of the Russian government. For journalists who remained committed to investigating Putin and the Kremlin, the consequences were frequently worse than imprisonment. Among the most famous journalists to be silenced in Russia was a woman who investigated Putin and his policies extensively: Anna Politkovskaya. Since Putin's invasion of Chechnya in 1999, she had reported on human rights abuses committed by Russians against the Chechens. In 2004, she published her book, Putin's Russia, and laid bare the corruption and oppression within Putin's presidency. Two years later, she was found murdered in the elevator of her apartment building.
Ironically, her murder occurred on October 7, 2006—Putin's 54th birthday. Less than two months after Politkovskaya's death, another high-profile death rocked Russia. This time, the death occurred in London. A middle-aged former Russian security officer, Alexander Litvinenko, had died under mysterious circumstances in a London hospital. British investigations into his death revealed that he was a vocal critic of Putin and that he had also leaked information from his days as an FSB security officer. Moreover, on the day he fell violently ill, Litvinenko had met with two men, later proven to be Russian security agents, in a London hotel. A postmortem investigation revealed that Litvinenko had been poisoned with a radioactive element: polonium-210. One probable theory holds that the Russian agents slipped the element into Litvinenko's tea, as traces of the substance were found in a teapot where they had met. High levels of polonium-210 were also discovered in the hotel bar. Further investigation proved that the two Russian agents were indeed responsible for Litvinenko's death, but investigators could not link the murder directly to Putin. It would be neither the last death nor the last high-profile poisoning conducted by Russian agents.

Putin's Changing Attitude toward the West

In Putin's second term, the Russian president began to shift his tone in working with the West. He had felt for years that the West treated Russia as second-class and backward. These attitudes, he believed, resulted in little genuine effort from the United States or Western Europe to work with Russia. This lack of respect and aid weighed deeply on Putin. Over the years, he increasingly distanced himself from the Western nations. As early as 2003, Putin was enormously critical of the United States' invasion of Iraq. He spoke of the flaws of Western dominance in global affairs.
In 2007, Putin delivered a speech in Munich in which he sharply criticized the United States' use of what he called excessive military force to enforce diplomacy with other countries, specifically those in the developing world. Similarly, icy tensions emerged between Putin and the United Kingdom. The British frequently gave asylum to political exiles from Russia, notably some of the country's oligarchs. This policy irritated Putin. Simultaneously, the British became frustrated and concerned with Putin because of events such as the Litvinenko poisoning, which had occurred within British borders.

Putin and NATO

Most irritating to Putin was the expansion of NATO. The North Atlantic Treaty Organization had been formed in 1949, four years into the Cold War between the Soviet Union and the West. It was created as a peacetime military alliance among Western nations. Among other things, it promised that if any NATO country were attacked, the other NATO nations would consider it an attack upon themselves and provide military aid. Practically, NATO was an alliance designed to protect Western nations from attacks by the Soviet Union during the Cold War. It was a very visible sign of solidarity and strength between Western Europe and the United States. The Soviet Union had responded with the creation of its equivalent: the Warsaw Pact. But after the Cold War, the Soviet Union collapsed. Communism was defeated in Europe, and the Warsaw Pact was dissolved. Russia hoped that the West would respond with a similar dismantling of NATO. Instead, NATO membership soared in the 1990s and 2000s. Since 1999, many former Soviet Bloc countries, such as Poland and Romania, have joined NATO. The Baltic states of Estonia, Latvia, and Lithuania, all of which share a border with Russia, also joined.
These Eastern European nations joined NATO for two reasons: the memory of life under the communist Soviet Union, and security against what they perceived as growing Russian aggression in Europe during the 2000s. From Putin's perspective, the rapid expansion of NATO, a Cold War-era entity, was a further sign that Europe was distancing itself from Russia. Why, he argued, would a Cold War-era alliance be expanding after the end of the Cold War? From whom did they expect an attack? And why was Western Europe so quick to accept former Soviet countries into NATO membership? The spark that further ignited Putin's fury was that NATO built military bases in Eastern European countries, namely Poland. For Putin, that was practically at Russia's back door. The building of such bases, he argued, threatened the security of Russia and amplified tensions between Russia and the West to Cold War levels.

Exit, Mr. Putin?

The Russian constitution limited the president to two consecutive terms of four years each. In 2008, Putin's time in office was nearly over. He could not run for a third term, or otherwise extend his presidency, without risking strong opposition from the Russian people. So Putin set out to find a suitable protégé. He found one in his long-time ally, former chief of staff, and deputy prime minister Dmitry Medvedev. Medvedev was more than a decade younger than Putin. He spoke with a professional courteousness that Putin lacked. Unlike Putin, he delivered speeches that resonated with Russian intellectuals, which pulled in support from that demographic. His boyish face was warm and ingratiating, a stark contrast to the austere, glacial Putin. And unlike Putin, Medvedev was inexperienced and malleable. The only problem arose from a small but vocal minority of the Russian people. In 2008, the popular chess master Garry Kasparov entered his candidacy for the presidency against Putin's man, Medvedev.
He ran on the claim that Putin and his inner circle were extremely corrupt. Among other charges, he implied that Putin was an autocratic president who was severely restricting freedoms in Russia. For a small percentage of Russians, Kasparov's claims struck home. Even Putin supporters could not deny that the president had worked to severely restrict and regulate the media in Russia. Moreover, Russians had concerns about voter fraud and the legitimacy of Russian elections. During his campaign, Kasparov gained a strong following. But his campaign was repeatedly halted by his periodic arrests for demonstrating against Putin. Late in 2007, Kasparov withdrew from the race, frustrated by the endless roadblocks he faced in campaigning against Putin's government. Although his run for the presidency failed, he had succeeded in raising awareness of Putin's increasing corruption and authoritarianism in two critical ways: first, larger numbers of Russians were questioning Putin and his policies; and second, Kasparov's failed campaign pointed to the severe authoritarianism of the state. At the time, Kasparov was handsome, wealthy, and internationally famous. If the Russian government could create such hurdles to stop his campaign, Kasparov's point was spot-on: Russia was far from free. According to voter records, Putin left the presidency in 2008 with overwhelming popular support. It was therefore unsurprising that his hand-picked successor also enjoyed strong support during his election campaign and ultimately won the presidential election. In 2008, Russia and the world welcomed the new president of Russia, Dmitry Medvedev. But within hours of taking his oath, Medvedev named Putin his prime minister.
The act left many asking, "Who is the real Russian president?"

The Cozy President and the Glacial Prime Minister: Dmitry Medvedev and Vladimir Putin (2008-2012)

From the moment Dmitry Medvedev stepped into the presidency, it became impossible to separate him from Vladimir Putin. At home and abroad, people described Medvedev's presidency as a "tandem" presidency in which decision-making was shared between the two men. Speculation arose that Medvedev had never had presidential aspirations, and that he simply bent to Putin's plan.

The Russo-Georgian War

Regardless of how he had come to power, Medvedev faced his first crisis only months into his presidency. Just south of the Russian Caucasus was the small, independent country of Georgia. There were longstanding tensions between Georgia and Russia over two small provinces in the Caucasus: South Ossetia and Abkhazia. In August 2008, violence erupted between South Ossetian troops and Georgian troops. Russian troops soon joined the conflict on the side of South Ossetia and launched an attack on Georgia. Within five days, the war was over. Georgia had been beaten into submission by Russian forces that had illegally invaded the country, and the Caucasus provinces remained firmly in Russian hands. Despite the brevity of the war, it resulted in mass displacement of Georgian civilians. Moreover, the lack of international response to Russia's illegal invasion emboldened Russia. Medvedev was at the helm, with Putin pulling his strings. And Prime Minister Putin would not forget the lack of Western response to the invasion of Georgia.

The Global Financial Crisis and Medvedev's Foreign Diplomacy

Medvedev's second challenge arrived in 2008 with the global economic recession. Russian GDP dropped sharply. For Russia, the recession proved a preview of the dangers of a mono-industry economy. Russia relied heavily on its gas and oil exports to drive its economy; if those industries collapsed, Russia would suffer enormously.
For his part, Medvedev spoke of diversifying the Russian economy. He advocated for development in the sectors of information and medical technology. But in his four years as president, little was done to promote a diversified economy. Russia weathered the 2008 economic crisis, but it was not until two years later that the economy began to recover and grow. In many ways, Medvedev mimicked Putin. Some noted that even Medvedev's intonation in speaking was like Putin's. In policy, Medvedev was like Putin as well. During his presidency, he increasingly turned away from the West and toward the East. In 2009, his foreign minister, Sergei Lavrov, famously received a "present" from United States Secretary of State Hillary Clinton: a box with a bright red button. The button was supposed to be labeled "Reset" in Russian, but the translation was inaccurate. Lavrov quickly pointed out that the Americans had used the wrong Russian word, and that it actually said "overcharge." Despite the brief embarrassment, the intention was clear to both parties: the United States recognized that the two countries were at odds, and a "resetting" of relations needed to occur. It was clear, however, that Russia was turning eastward instead of toward the West. President Medvedev developed working relationships with some of the world's most notorious heads of state, including Kim Jong Il of North Korea, Hugo Chavez of Venezuela, and Fidel Castro of Cuba.

The Forgotten President?

In 2011, President Medvedev announced that he would not run for re-election. Instead, he threw his support to his prime minister: Vladimir Putin, he proclaimed, would be an excellent candidate for president. No one in Russia or abroad seemed surprised by the announcement. But in Russia, thousands protested what they deemed an increasingly corrupt government. Among those leading the protests was a thirty-five-year-old lawyer and YouTube blogger, Alexei Navalny.
Very successfully, he labeled Putin's political party, United Russia, the party of "crooks and thieves." He used the internet to broadcast evidence of Putin's "stolen" wealth: vast palaces and yachts. He also advocated for a free and democratic Russia, something unknown in Putin's Russia. Through his media platforms, Navalny garnered millions of supporters. For the first time since 1999, Putin had a strong political opponent. It was a momentous start for Navalny. Within two years, he would lead the Russian opposition against Vladimir Putin and become an international household name. In 2012, Medvedev exited the presidency and became Vladimir Putin's prime minister. While it is still debated to what extent Medvedev's presidency was really his own, the prevailing opinion is that Putin was heavily involved all along. As Medvedev left the presidency, so too did public memory of him. Within five years of his exit, Medvedev was largely forgotten by most of the world.

Return of President Putin: 2012-Present

Vladimir Putin returned to the president's office in May 2012. Mass protests occurred during his inauguration. Protestors decried the election and claimed that rampant voter fraud had taken place. In response, Russian police arrested thousands of protestors. The arrests led dozens of international organizations to declare that human rights were being violated en masse in Russia. Over the next decade, organizations and countries around the world would protest that Russia was erasing basic human freedoms. Indeed, during his third and fourth terms, Putin would increasingly implement measures to crack down on dissent in his autocratic Russia. Ever at the forefront of his thoughts was a two-sided coin. On one side was the goal of promoting Russia as a major power in global affairs, on equal standing with the West.
On the flip side of the coin was the fear that Russia would retreat from the world stage, or be dismantled as the Soviet Union and Yugoslavia had been. Together, these thoughts created in Putin a strong, unyielding Russian nationalism, a characteristic that would set him on a collision course with Western powers in 2014, and again in 2022, when he launched invasions of Ukraine.

Attributions

Images courtesy of Wikimedia Commons

Gessen, Masha. The Man without a Face: The Unlikely Rise of Vladimir Putin. New York: Penguin Books, 2014.

Lourie, Richard. Putin: His Downfall and Russia's Coming Crash. New York: Thomas Dunne Books, 2017.
Source: "The Chameleon in the Kremlin: Contemporary Russia under Putin," Statewide Dual Credit World History, by Anna McCollum. Licensed under Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/).
Genocide and Ethnic Cleansing in the Postwar Era

Overview

Ethnic Cleansing and Genocide since World War II: The Cambodian Genocide

As the news and scope of the Holocaust came to public light following World War II, and especially in the 1960s, it seemed that the world would never again engage in such barbaric and inhumane treatment. Surely, the death of six million Jews had taught humanity that it must never again engage in genocide. And yet, tragically, the moral lessons of the Holocaust all too soon faded. The promise of "Never again!" transformed into a crude reality, "Not again!", as genocide unfolded in the Pacific, Asia, Europe, and Africa in the latter half of the twentieth century.

Learning Objectives

- Identify the importance of the Cambodian Genocide.

Key Terms / Key Concepts

Cambodia: country in Southeast Asia between Vietnam and Thailand

Cambodian Genocide: 1975 – 1979 event in which large portions of the middle and upper classes, as well as minorities, were exterminated in Cambodia by the Khmer Rouge

Ethnic cleansing: forced removal of a population from an area, usually violently

Genocide: term created by Raphael Lemkin in 1944, taken from Greek and Latin roots, meaning literally the "killing of people"

Khmer Rouge: communist party in Cambodia from 1975 – 1979 that was responsible for the Cambodian Genocide

Killing fields: sites throughout Cambodia where hundreds of thousands of Cambodian civilians were murdered by the Khmer Rouge

Pol Pot: communist leader of Cambodia from 1963 – 1981

Ethnic Cleansing vs. Genocide: Background

Nearly eighty years have passed since the end of World War II, when the world first learned of the Holocaust. Since then, multiple cases of genocide and ethnic cleansing have occurred around the world. The term genocide is relatively young. After much effort, the Polish lawyer Raphael Lemkin coined the term in 1944.
Genocide literally translates to the "killing of people." In particular, the term implies a deliberate, centralized attempt to systematically destroy an entire race, ethnic group, or group of people. The Holocaust is the most commonly agreed-upon genocide in history. Nations and governments are often skeptical of applying the word "genocide" to other events, such as the Armenian Genocide, because it implies that a government was deliberately involved in the planning and murder of a group of people. For this reason, many genocides are still contested. Closely tied to genocide is the concept of ethnic cleansing. In contrast to genocide, which is intent on the systematic destruction of a specific group of people, ethnic cleansing describes the forced removal of a group of people from a specific area in order to make that area homogenous. Under international law, ethnic cleansing constitutes a war crime because the targeted groups are often subjected to brutal treatment, poor living conditions, physical abuse, and destruction of property.

Cambodian Genocide

During the Cold War, Chinese leader Mao Zedong supported communist leaders throughout the world. In the 1970s, he strongly supported the Cambodian leader Pol Pot, who came to power with his political party, the Khmer Rouge. The goal of the new communist government was the creation of an entirely self-sufficient agrarian society. To achieve this, the Khmer Rouge launched a campaign of eradication. Men, women, and children of the middle and upper classes were targeted. Among the groups targeted were individuals connected to the previous government, intellectuals, monks, and professional people such as doctors, lawyers, businessmen, and journalists. Racial and religious minorities were also targeted. All of these groups were considered "subversive" by the Khmer Rouge and a potential threat to their communist state. Victims were arrested and frequently summarily killed.
Most famously, the Khmer Rouge took their victims to the killing fields, where mass murders unfolded. Hundreds of thousands of people were killed, often by pickax or machete to conserve ammunition. Across the country, dozens of sites have been discovered where these mass murders occurred. Based on the findings, historians estimate that 1.5 – 2 million people were systematically murdered by the Khmer Rouge between 1975 and 1979. The killings finally ceased in 1979, when the Vietnamese Army invaded Cambodia and overthrew the Khmer Rouge. The Cambodian Genocide remains one of the most definitive examples of genocide since the Holocaust. It was a four-year campaign to systematically eradicate specific groups of people; it can also be considered a classicide, an attempt to destroy the educated and professional middle and upper classes.

Ethnic Cleansing and Genocide since World War II: The Bosnian Genocide

Learning Objectives

- Identify the importance of the Bosnian Genocide.

Key Terms / Key Concepts

Balkans: group of countries in southeast Europe bordering the Aegean, Black, and Adriatic Seas

Bosniak: Bosnian Muslim

Croatian War of Independence: 1991 – 1995 conflict between Croatia and Serbia

Dayton Accords: 1995 cease-fire agreement that established the Republika Srpska in Bosnia and Herzegovina

Republika Srpska: political territory established in Bosnia and Herzegovina's southern and eastern regions that is predominantly inhabited by Serbs

Slobodan Milošević: Serbian leader from 1989 to 2000 who was instrumental in facilitating the Yugoslav Wars and the Bosnian Genocide

Srebrenica: site of one of the most infamous mass murders of the Bosnian Genocide

Tito: communist leader of Yugoslavia from 1944 – 1980

Yugoslavia: communist nation comprised of six Balkan countries from 1945 – 1991

Yugoslav Wars: series of conflicts between Serbia, Croatia, Bosnia, and other Balkan nations from 1991 – 2001

Vukovar: city in eastern Croatia that experienced heinous
war crimes in the Croatian War of Independence

The Yugoslav Wars

If the Holocaust introduced the world to the concept of genocide, the Yugoslav Wars of the late 1980s and 1990s made the world more aware of ethnic cleansing.

Background

In southeast Europe lies a stretch of countries collectively known as the Balkans. These countries include Greece, Albania, Croatia, Serbia, Bosnia, Macedonia, and a handful of others. They are historically rich in language, religion, and culture. Because of their ethnic and religious diversity, as well as territorial squabbles, the countries of the Balkans also have a long history of conflict among themselves. During the Cold War, the political strongman Tito came to power, and the Balkans were united under the communist banner. Six countries formed the Cold War state of Yugoslavia: Bosnia and Herzegovina, Serbia, Croatia, Macedonia, Montenegro, and Slovenia. Under Tito's rule, Yugoslavia remained united despite religious and ethnic tension. Predominantly, the Serbs were Eastern Orthodox, the Croats were Catholic, and the Bosnians were Muslim. These religious differences would set the stage for the Yugoslav Wars. While Tito remained in power, the six nations cooperated. Anyone who dissented was summarily dismissed from their post by Tito. Tito's policy of "state above all else" was extremely popular, and when he died in 1980, millions throughout Yugoslavia mourned his death. Tito's successors tried desperately to keep the Yugoslav state together. Despite their efforts, old rivalries and tensions soon emerged. As one of Tito's successors remarked, "Will we all just go back to shooting each other?" In 1987, a Serbian socialist politician entered the forefront of the Balkan stage: Slobodan Milošević. More than any other man, he divided the peoples of the Balkans by promoting Serbian nationalism. Slowly, he transformed the Yugoslav Army into a predominantly Serbian army, and he refueled old hatreds between groups.
To achieve his purpose of promoting Serbian dominance in the Balkans and Serbian nationalism at home, Slobodan Milošević masterfully employed television. He broadcast messages to the Serbian people, showing images of the developing Serbian military. Famously, when Serbs were beaten by policemen from Kosovo and Albania, a videographer caught Milošević's response as he lifted a Serb to his feet: "You will not be beaten again." The image of the downtrodden Serbs being physically lifted by Milošević before their rivals sent a powerful signal throughout Serbia. At home, Milošević's popularity skyrocketed, and he was hailed as the next Tito. But Milošević had bigger ambitions for Serbia than unity with its neighbors. His chance to elevate Serbia's place in the Balkans came in 1991, when the Soviet Union, and with it the Yugoslav state, collapsed.

The Yugoslav Wars Begin

The collapse of the Soviet Union struck Serbia and Milošević hard. Serbia and Russia had a deep history of economic and political alliances. And it became clear that Milošević would never become the next Tito; he would have to settle for president of Serbia. But he would make sure the Balkans remembered him, too. In 1991, Croatia voted for independence. A year later, Bosnia did the same. The acts infuriated Milošević because both countries had large Serbian populations. He would not allow them to become independent nations containing so many of "his people" without a fight.

Croatian War of Independence

In the spring of 1991, skirmishes began between the Serbs and the Croats, mostly in Croatian territory where there was a strong Serbian minority. By the summer, Croatia had officially declared independence. In response, the Serbian army escalated the conflict. It launched violent attacks against the Croats on the battlefields, as well as furious bombardments of their cities, including Dubrovnik. In November 1991, Serbian forces surrounded the Croatian city of Vukovar.
Soldiers and citizens were subjected to heavy artillery bombardment from the advancing Serbian army. Severely outnumbered, the Croatian forces were quickly overrun. Inside the city, Serbian soldiers executed Croatian soldiers and civilians at will. Many soldiers and civilians had sheltered in the Vukovar hospital. The Serbian army secured the hospital and quickly seized two hundred people, mostly civilians. They were then transported to a pig farm outside the city and shot. Additional mass murders followed. The non-Serbian civilians who survived the massacres were forced from the city by the Serbians, who drove them into ramshackle concentration camps lined with barbed wire. Deplorable conditions existed inside the camps, and thousands of Croats perished from disease and malnourishment or from execution. The removal of all non-Serbian peoples was undertaken to ethnically cleanse the city. Vukovar, which sustained heavy damage, was the first European city to be mostly destroyed in war since 1945. The Croatian War of Independence, the first major Yugoslav War, ended in 1995 after two successful offensive campaigns launched by the Croats. In four years, Croatia had lost 20,000 people, and roughly a quarter of its economy was devastated. It appeared as if its experiment as an independent, democratic nation was off to a shaky start.

The Bosnian War

The Bosnian War far exceeded the violence of the Croatian War of Independence. In the Bosnian War, the Serbian army launched an all-out campaign to eradicate the Bosniaks—Bosnian Muslims. In the summer of 1992, less than fifty years after the end of World War II, Europe again saw genocide. Serbian nationalists promised to "liberate" towns and cities throughout Bosnia. In this mission, they employed thousands of Serbian troops. In addition to the army, they recruited paramilitary forces, including the highly feared Arkan's Tigers.
Under the command of Željko Ražnatović, known as Arkan—an international mobster on Interpol's Most Wanted List—the Tigers were feared for their readiness to carry out excessively brutal murders of both Croats and Bosniaks. Overnight, Serbians living in Bosnia turned against their Bosniak neighbors. Many joined the Serbian armed forces and paramilitary groups. These neighbors later participated in the roundup and mass murder of the Bosniaks. Ethnic cleansing, rapes, looting, and prolonged sieges of major cities like Sarajevo reigned in Bosnia. Across the country, deplorable concentration camps sprang up to contain Bosniak and Croat prisoners, and Serbians engaged in mass executions, primarily of Bosniaks. The most infamous of the mass murders occurred at Srebrenica, on the eastern border of Bosnia near Serbia, where an estimated 8,000 Bosniak boys and men were murdered and thrown into mass graves. The conflict in Bosnia ended in 1995 with the Dayton Accords. The peace treaty was negotiated at Wright-Patterson Air Force Base outside Dayton, Ohio. Key to the success of the treaty were the U.S. peace negotiator Richard Holbrooke and the American Secretary of State. The treaty established a separate, largely Serbian region in Bosnia: the Republika Srpska. Located on Bosnia's southern and eastern sides, the Republika Srpska remains largely inhabited by Serbians. The other side of Bosnia and Herzegovina is home predominantly to Croats and Bosniaks.

Kosovo

The Yugoslav Wars did not end until the early 2000s. In 1998 – 1999, war erupted in Kosovo, to the southeast of Bosnia. When Kosovars declared autonomy, they were challenged and attacked by forces from Serbia and Montenegro. Fears escalated that Kosovars were being persecuted by the attacking forces. In 1999, NATO launched a series of airstrikes over Serbia. For nearly three months, NATO forces bombed parts of Serbia and Montenegro until Serbian and Montenegrin forces withdrew.
The NATO actions remain controversial because the airstrikes were never approved by the UN Security Council, where they were vetoed by China and Russia. Moreover, the maneuver was technically illegal because no NATO nation had been directly attacked. Instead, the NATO airstrikes were conducted in the name of protecting humanity, without official approval from the United Nations. Tension and insurgencies persisted in Kosovo and Montenegro into the early 2000s. Today, the region remains unsettled and is composed of six independent nations. For their parts in the Yugoslav Wars, Slobodan Milošević and his inner circle were charged with war crimes, crimes against peace, and crimes against humanity. Milošević died in prison of a heart condition before his trial could be concluded. His close associates Ratko Mladić and Radovan Karadžić were convicted of war crimes by the international tribunal in The Hague and sentenced to life in prison.

Ethnic Cleansing and Genocide since World War II: The Rwandan Genocide

Learning Objectives

- Identify the importance of the Rwandan Genocide.

Key Terms / Key Concepts

Hutu: ethnic majority group in Rwanda in the 1990s

Rwanda: landlocked country east of Congo in Africa

Rwandan Genocide: mass slaughter of Tutsis by Hutus and their collaborators in 1994

Rwandan Patriotic Front: Tutsi-led force that fought against the Hutus during the Rwandan Genocide

Tutsi: ethnic minority group in Rwanda in the 1990s

The Rwandan Genocide

The Rwandan Genocide was the mass slaughter of Tutsi people in Rwanda by members of the Hutu majority government in 1994. An estimated 500,000 to one million Rwandans were killed during the 100-day period from April 7 to mid-July 1994, constituting as many as 70% of the Tutsi population and 20% of Rwanda's overall population.

Preparation for Genocide

Historians do not agree on a precise date on which the idea of a "final solution" to kill every Tutsi in Rwanda was introduced.
The Rwandan army began training Hutu youth in combat and arming civilians with weapons such as machetes in 1990. Rwanda also purchased large numbers of grenades and munitions starting in late 1990, and the Rwandan Armed Forces (FAR) expanded rapidly during this time, growing from fewer than 10,000 troops to almost 30,000 in one year. Throughout 1993, far-right nationalists imported machetes from China on a scale far larger than required for agriculture, as well as other tools that could be used as weapons, such as razor blades, saws, and scissors. These tools were distributed around the country, ostensibly as part of the civil defense network. And in March 1993, the Hutu Power group began compiling lists of "traitors" whom they planned to kill. In October 1993, the president of Burundi, Melchior Ndadaye, who had been elected in June as the country's first ever Hutu president, was assassinated by extremist Tutsi army officers. The assassination sent shock waves throughout the country, reinforcing the notion among Hutus that the Tutsi were their enemy and could not be trusted. The idea of a Tutsi "final solution" now occupied the top of Hutu Power agendas and was actively planned. The Hutu Power groups were confident of persuading the Hutu population to carry out killings, given the public anger at Ndadaye's murder, the persuasiveness of propaganda, and the traditional obedience of Rwandans to authority. Hutu Power leaders began arming militia groups with AK-47s and other weapons, whereas previously the militias had possessed only machetes and traditional hand weapons.

Assassination of Habyarimana

On April 6, 1994, the airplane carrying President Habyarimana of Rwanda and Cyprien Ntaryamira, the Hutu president of Burundi, was shot down as it prepared to land in Kigali, killing everyone on board. Responsibility for the attack was disputed, but despite disagreements about the perpetrators, the attack and the deaths of the two Hutu presidents served as the catalyst for the subsequent genocide of the Tutsi.
Genocide
The genocide itself began on April 7, 1994. The commanders announced the president's death, blamed the Tutsis, and then ordered the crowd to begin killing Tutsi people. The genocide quickly spread throughout the country. For the remainder of April and early May, the Presidential Guard, the gendarmerie, and youth militias continued killing at very high rates. These groups were aided by local populations, as Hutu neighbors turned on their Tutsi neighbors. Historian Gerard Prunier estimates in his book, The Rwanda Crisis, that up to 800,000 Rwandans were murdered during the first six weeks of the genocide, a rate of killing five times higher than that of the Holocaust. The goal of the genocide was to kill every Tutsi living in Rwanda, and except for the advancing Rwandan Patriotic Front (RPF), there was no opposition force to prevent or slow the killings. Escape proved nearly impossible. Anyone who encountered a roadblock was required to show a national identity card that listed ethnicity, and anyone carrying a Tutsi card was slaughtered immediately. Many Hutus were also killed for a variety of reasons, including demonstrating sympathy for moderate opposition parties, being a journalist, or simply appearing Tutsi. The RPF made slow and steady gains in the north and east of the country, ending the killings in each area it occupied.

Impact
Given the chaotic nature of the situation, there is no consensus on the number of people killed during the genocide. Unlike the genocides carried out by Nazi Germany or the Khmer Rouge in Cambodia, authorities made no attempts to document or systematize deaths. The succeeding RPF government has stated that 1,071,000 people were killed in 100 days, 10% of whom were Hutu. Based on those statistics, roughly 10,000 people were murdered every day.

End of the Rwandan Genocide
The infrastructure and economy of Rwanda suffered greatly during the genocide.
Many buildings were uninhabitable, and the former regime had taken all currency and movable assets when it fled the country. Human resources were also severely depleted: over 40% of the population had been killed or had fled. Many of those who remained were traumatized; most had lost relatives, witnessed killings, or participated in the genocide. In 1994, the UN established a criminal tribunal to try Rwandan war criminals; it convicted 85 individuals. Rwandan courts also tried individuals for their participation in war crimes during the genocide.

Attributions
Images courtesy of Wikimedia Commons
Boundless World History, "Rwandan Genocide": https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-rwandan-genocide/ (https://creativecommons.org/licenses/by-sa/4.0/)
The Holocaust and other Genocides: History, Representation, and Ethics. Helmut Walser Smith, Ed. Vanderbilt Publishing; Nashville, TN: 2002.
Source: "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 16: Globalization, Genocide and Ethnic Cleansing in the Postwar Era" by Anna McCollum, Creative Commons Attribution 4.0: https://oercommons.org/courseware/lesson/88105/overview
Challenges of Interwar Latin America

Overview
United States Good Neighbor Policy
Latin America experienced a significant turn in the early to mid-20th century. Much of this turn came from economic policies such as the Bracero Program and Import Substitution Industrialization. Many parts of Latin America also saw new cultural and political programs.

Learning Objectives
- Evaluate the role of World War II in Latin America.
- Analyze the responses of Latin American leaders to the United States in the interwar period.
- Evaluate the impact of the Good Neighbor Policy on Latin America.

Key Terms / Key Concepts
Bracero Program: a series of laws and diplomatic agreements initiated on August 4, 1942, that guaranteed basic human rights and a minimum wage of 30 cents an hour to temporary contract laborers traveling from Mexico to the United States

The Good Neighbor Policy
Manifest Destiny in the 19th century put forward the idea that the United States was centrally interested in Latin America as a site of expansion and growth. Throughout the late 19th and early 20th centuries, the United States treated Latin America as part of a broader cultural and economic sphere, and Latin American countries were expected to adhere to US policies and ideas. This is best illustrated by Cuba: the relationship with the United States written into the Cuban Constitution permitted the United States to intervene in Cuba whenever it deemed necessary. The interwar period changed this relationship between Latin America and the United States. Following World War I, the United States pursued a policy of isolationism. President Franklin Delano Roosevelt's administration changed this when it began pushing for a policy known as the Good Neighbor Policy. The premise behind the policy was that a good neighbor does not enter someone's house and try to fix its problems; instead, a good neighbor stays outside and points out that the problems exist.
This resulted in a unique relationship between the United States and Latin American states. The Good Neighbor Policy meant that Latin American states began to find a rhythm that worked better for their own people and governments, as there was far less external pressure to deal with. The freedom to explore what made policy sense in Argentina or Cuba, without United States or other foreign interference, meant the development of unique policy goals in each of these states. At this juncture, many Latin American states had the opportunity to grow their own ideas and agendas.

The Good Neighbor Policy came about because of the economic downturn of the Great Depression, a depression that also affected Latin American nations. Many Latin American states suffered economically. Mexico, for example, struggled to find money and resources throughout this period, and Chilean and Peruvian goods did not find markets in the 1930s. While there was a degree of cultural and economic openness in the 1930s, it was limited in scope and scale by the economic collapse of the Great Depression. Out of that catastrophe, though, came a unique economic model that many Latin American countries began to adopt.

In the colonial period and the 19th century, Latin America was a site of production of raw materials sold directly to Europe. Brazil grew massive amounts of coffee and sugar to sell to European markets, and bananas from Central America were sold directly to consumers in North America and Europe, while Europeans sold higher-end finished products in return. German industry, for example, sold machine guns to Chile. The problem was that these raw products were relatively cheap in comparison to the finished products' prices.
This imbalance of trade caused a problem: as more finished goods entered the market, Latin American states struggled to keep up with purchasing them. Because cars were expensive while bananas were very inexpensive, it took an enormous number of bananas to purchase a single car. The limits on US government interference meant that Latin America could start to explore how to make and manufacture its own goods. Many Latin American economists began to think critically during the interwar period about how to change this system of trade imbalance. The economist Raúl Prebisch began exploring the idea of Latin American governments pushing consumers to change their trade behaviors. The policy Prebisch called for is known as Import Substitution Industrialization, or more commonly ISI. Prebisch argued that instead of buying finished goods, Latin American governments should buy the machines to make their own finished goods. Instead of buying cars from Italy or Germany, for example, Argentines would buy the machines to make their own cars. This shift was an important one, as it became the model for Latin American governments in the decades that followed.

Mexico

Key Terms / Key Concepts
Democratic Current: a movement within the PRI founded in 1986 that criticized the federal government for reducing spending on social programs to increase payments on foreign debt (PRI members who participated in the Democratic Current were expelled from the party and formed the National Democratic Front (FDN).)
habeas corpus: a writ requiring a person under arrest to be brought before a judge or into court, especially to secure the person's release unless lawful grounds are shown for their detention
import substitution industrialization: a trade and economic policy that advocates replacing foreign imports with domestic production
National Revolutionary Party: the Mexican political party founded in 1929 that held executive power within the country for an uninterrupted 71 years (It underwent two name changes during its time in power: once in 1938, to Partido de la Revolucion Mexicana (PRM), and again in 1946, to Partido Revolucionario Institucional (PRI).)

Corruption and Opposing Political Parties
As in previous regimes, the PRM retained its hold over the electorate through massive electoral fraud. Toward the end of every president's term, consultations with party leaders would take place and the PRM's next candidate would be selected. In other words, the incumbent president would pick his successor. To support the party's dominance in the executive branch of government, the PRM sought dominance at other levels as well. It held an overwhelming majority in the Chamber of Deputies, as well as every seat in the Senate and every state governorship. As a result, the PRM over time became a symbol of corruption, including voter suppression and violence. In 1986, Cuauhtemoc Cardenas, the former Governor of Michoacan and son of former president Lazaro Cardenas, formed the Democratic Current, which criticized the federal government for reducing spending on social programs to increase payments on foreign debt. Members of the Democratic Current were expelled from the party, and in 1987, they formed the National Democratic Front, or Frente Democratico Nacional (FDN). In 1989, the left wing of the PRM, now called the Partido Revolucionario Institucional, or PRI, went on to form its own party, the Party of the Democratic Revolution.
The conservative National Action Party, likewise, grew after 1976 when it obtained support from the business sector in light of recurring economic crises. The growth of both these opposition parties resulted in the PRI losing the presidency in 2000. The Mexican Economic Miracle The Mexican Economic Miracle refers to the country’s inward-focused development strategy, which produced sustained economic growth of 3-4 percent with modest 3 percent inflation annually from the 1940s until the 1970s. Creating the Conditions for Growth The reduction of political turmoil that accompanied national elections during and immediately after the Mexican Revolution was an important factor in laying the groundwork for economic growth. This was achieved by the establishment of a single, dominant political party that subsumed clashes between various interest groups within the framework of a unified party machine. During the presidency of Lazaro Cardenas, significant policies were enacted in the social and political spheres that had major impacts on the economic policies of the country. For instance, Cardenas nationalized oil concerns in 1938. He also nationalized Mexico’s railways and initiated far-reaching land reform. Some of these policies were carried on, albeit more moderately, by Manuel Avila Camacho, who succeeded him to the presidency. Camacho initiated a program of industrialization in early 1941 with the Law of Manufacturing Industries, famous for beginning the process of import-substitution within Mexico. Then in 1946, President Miguel Aleman Valdes passed the Law for Development of New and Necessary Industries, continuing the trend of inward-focused development strategies. Growth was sustained by Mexico’s increasing commitment to primary education for its general population. The primary school enrollment rate increased threefold from the late 1920s through to the 1940s, making economic output more productive by the 1940s. 
Mexico also made investments in higher education during this period, which encouraged a generation of scientists and engineers to enable new levels of industrial innovation. For instance, in 1936 the Instituto Politecnico Nacional was founded in the northern part of Mexico City. Also in northern Mexico, the Monterrey Institute of Technology and Higher Education was founded in 1942.

World War II
Mexico benefited substantially from World War II by supplying labor and materials to the Allies. For instance, in the U.S. the Bracero Program was a series of laws and diplomatic agreements initiated on August 4, 1942, that guaranteed basic human rights and a minimum wage of 30 cents an hour to temporary contract laborers who came to the United States from Mexico. Braceros—meaning manual laborer, literally "one who works using his arms"—were intended to fill the U.S. labor shortage in agriculture that was occurring because farmers were drafted into service. The program outlasted the war and offered employment contracts to 5 million braceros in 24 U.S. states, making it the largest foreign worker program in U.S. history. Mexico also received cash payments for its contributions of materials useful to the war effort, which infused its treasury with reserves. With the large economic reserves built up during the war, Mexico was able to embark on large infrastructure projects. Camacho used part of the accumulated savings from the war to pay off foreign debts, which improved Mexico's credit substantially and increased investors' confidence in the government. The government was also in a better position to more widely distribute material benefits from the Revolution, given the robust revenues from the war effort. Camacho used funds to subsidize food imports, which benefited urban workers. Mexican workers also received high salaries during the war, but due to the lack of consumer goods, spending did not increase substantially.
The national development bank, Nacional Financiera, was founded under Camacho's administration and funded the expansion of the industrial sector.

Import-Substitution and Infrastructure Projects
The economic stability of the country, high credit rating, increasingly educated work force, and savings from the war provided excellent conditions under which to begin a program of import substitution industrialization. In the years following World War II, President Miguel Aleman Valdes (1946 – 52) instituted a full-scale import-substitution program that stimulated output by boosting internal demand. The government raised import controls on consumer goods but relaxed them on capital goods such as machinery. Capital goods were then purchased using international reserves accumulated during the war and used to produce consumer goods domestically. One industry that was particularly successful was textile production. Mexico became a desirable location for foreign transnational companies like Coca-Cola, Pepsi-Cola, and Sears to establish manufacturing branches during this period. The share of imports subject to licensing requirements rose from 28 percent in 1956 to more than 60 percent on average during the 1960s and approximately 70 percent during the 1970s. Industry accounted for 22 percent of total output in 1950, 24 percent in 1960, and 29 percent in 1970. Meanwhile, the share of total output arising from agriculture and other primary activities declined during the same period. The Mexican government promoted industrial expansion through public investment in agricultural, energy, and transportation infrastructure. Cities grew rapidly after 1940, reflecting the shift of employment towards industrial and service centers rather than agriculture. To sustain these population changes, the government invested in major dam projects to produce hydroelectric power, supply drinking water to cities and irrigation water to agriculture, and control flooding.
By 1950, Mexico's road network had also expanded to 21,000 kilometers, some 13,600 of which were paved. Mexico's strong economic performance continued into the 1960s, when GDP growth averaged around seven percent overall and approximately three percent per capita. Consumer price inflation averaged only about three percent annually. Manufacturing remained the country's dominant growth sector, expanding seven percent annually and attracting considerable foreign investment. By 1970, Mexico had diversified its export base and become largely self-sufficient in food crops, steel, and most consumer goods. Although imports remained high, most were capital goods used to expand domestic production.

Brazil

Key Terms / Key Concepts
Brazilian Miracle: a period of exceptional economic growth in Brazil during the rule of the Brazilian military government, which reached its peak during the tenure of President Emilio Garrastazu Medici from 1969 to 1973 (During this time, average annual GDP growth was close to 10%.)
coronelismo: the Brazilian political machine during the Old Republic that was responsible for the centralization of political power in the hands of locally dominant oligarchs, known as coronels, who would dispense favors in return for loyalty
latifúndios: extensive parcels of privately owned land, particularly landed estates that specialized in agriculture for export

The Old Republic
Governance in Brazil's Old Republic wavered between state autonomy and centralization. The First Brazilian Republic, or Old Republic, covers a period of Brazilian history from 1889 to 1930 during which the country was governed as a constitutional democracy. Democracy, however, was nominal in the republic. In reality, elections were rigged and voters in rural areas were pressured to vote for their bosses' chosen candidates.
If that method did not work, the election results could still be changed by one-sided decisions of Congress’s verification of powers commission (election authorities in the República Velha were not independent from the executive and the Legislature, but dominated by the ruling oligarchs). As a result, the presidency of Brazil during this period alternated between the oligarchies of the dominant states of Sao Paulo and Minas Gerais. The regime is often referred to as “café com leite,” or “coffee with milk,” after the respective agricultural products of the two states. Brazil’s Old Republic was not an ideological offspring of the republics of the French or American Revolutions, although the regime would attempt to associate itself with both. The republic did not have enough popular support to risk open elections and was born of a coup d’etat that maintained itself by force. The republicans made Field Marshal Deodoro da Fonseca president (1889 – 91) and after a financial crisis, appointed Field Marshal Floriano Vieira Peixoto the Minister of War to ensure the allegiance of the military. Rule of the Landed Oligarchies The history of the Old Republic is dominated by a quest to find a viable form of government to replace the preceding monarchy. This quest swung Brazil back and forth between state autonomy and centralization. The constitution of 1891 established the United States of Brazil and granted extensive autonomy to the provinces, now called states. The federal system was adopted, and all powers not explicitly granted to the federal government in the constitution were delegated to the states. Over time, extending as far as the 1920s, the federal government in Rio de Janeiro was dominated and managed by a combination of the more powerful Brazilian states: Sao Paulo, Minas Gerais, Rio Grande do Sul, and to a lesser extent Pernambuco and Bahia. The sudden elimination of the monarchy left the military as Brazil’s only viable, dominant institution. 
As a result, the military developed as a national regulatory and interventionist institution within the republic. Although the Roman Catholic Church maintained a presence, it remained primarily international in its personnel, doctrine, liturgy, and purposes. The Army began to eclipse other military institutions, such as the Navy and the National Guard. However, the armed forces were divided over their status, relationship to the political regime, and institutional goals. Therefore, the lack of military unity and disagreement among civilian elites regarding the military's role in society prevented the establishment of a long-term military dictatorship within the country. The Constituent Assembly that drew up the constitution of 1891 was a battleground between those seeking to limit executive power, which was dictatorial in scope under President Deodoro da Fonseca, and the Jacobins—radical authoritarians who opposed the coffee oligarchy and wanted to preserve and intensify presidential authority. The constitution established a federation supposedly governed by a president, a bicameral National Congress, and a judiciary. However, real power rested in the hands of regional patrias and local potentates, called "colonels". Alongside the constitutional system there was the real system of unwritten agreements (coronelismo) among the colonels. Under coronelismo, local oligarchies chose state governors, who selected the president. This informal but real distribution of power emerged as a result of armed struggles and bargaining. The system consolidated the state oligarchies around families that were members of the old monarchical elite, and to provide a check to the Army, the state oligarchies strengthened the navy and state police. In larger states, state police evolved into small armies. In the final decades of the 19th century, the United States, much of Europe, and neighboring Argentina expanded the right to vote.
Brazil, however, moved to restrict access to the polls under the monarchy and did not correct the situation under the republic. By 1910, only 627,000 eligible voters could be counted among a total population of 22 million. Throughout the 1920s, only between 2.3% and 3.4% of the total population could vote. The middle class was far from active in political life. High illiteracy rates went hand in hand with the absence of universal suffrage or a free press. In regions far from major urban centers, news could take four to six weeks to arrive. In this context, a free press created by European immigrant anarchists started to develop during the 1890s and 1900s and spread widely, particularly in large cities.

Latifundio Economies
Around the start of the 20th century, the vast majority of Brazil's population lived in plantation communities. Because of the legacy of Ibero-American slavery, abolished as late as 1888 in Brazil, there was an extreme concentration of landownership reminiscent of feudal aristocracies: 464 great landowners held more than 270,000 km² of land (latifúndios), while 464,000 small and medium-sized farms occupied only 157,000 km². Large estate owners used their land to grow export products like coffee, sugar, and cotton, and the communities who resided on their land would participate in the production of these cash crops. Most typical estates included the owner's chaplain and overseers, indigent peasants, sharecroppers, and indentured servants. As a result, Brazilian producers tended to neglect the needs of domestic consumption, and four-fifths of the country's grain needs were imported. Brazil's dependence on factory-made goods and loans from technologically, economically, and politically advanced North Atlantic countries stunted its domestic industrial base. Farm equipment was primitive and largely non-mechanized. Peasants tilled the land with hoes and cleared the soil through the inefficient slash-and-burn method.
Meanwhile, living standards were generally squalid. Malnutrition, parasitic diseases, and lack of medical facilities limited the average life span in 1920 to 28 years. Without an open market, Brazilian industry could not compete against the technologically advanced Anglo-American economies. In this context occurred the Encilhamento, a "boom and bust" process that first intensified and then crashed in the years between 1889 and 1891, the consequences of which were felt in all areas of the Brazilian economy for many decades afterward. During this period, Brazil did not have a significantly integrated national economy. The absence of a large internal market and of overland transportation, except for mule trains, impeded internal economic integration, political cohesion, and military efficiency. Instead, Brazil had a grouping of regional economies that exported their own specialty products to European and North American markets. The Northeast exported its surplus cheap labor but saw its political influence decline in the face of competition from Caribbean sugar producers. The wild rubber boom in Amazônia declined due to the rise of efficient Southeast Asian colonial plantations after 1912. The growth of the South's nationally oriented market economies was not dramatic, but it was steady, and by the 1920s it allowed Rio Grande do Sul to exercise considerable political leverage. Real power resided in the coffee-growing states of the Southeast—São Paulo, Minas Gerais, and Rio de Janeiro—that produced the most export revenue. Those three and Rio Grande do Sul harvested 60% of Brazil's crops, turned out 75% of its industrial and meat products, and held 80% of its banking resources.

Struggles for Reform
Support for industrial protectionism increased during the 1920s. Under considerable pressure from the growing middle class, a more activist, centralized state adapted to represent the new bourgeoisie's interests.
A policy of state intervention, consisting of tax breaks, lowered duties, and import quotas, expanded the domestic capital base. During this time, São Paulo was at the forefront of Brazil's economic, political, and cultural life. Known colloquially as a "locomotive pulling the 20 empty boxcars" (a reference to the 20 other Brazilian states) and Brazil's industrial and commercial center to this day, São Paulo led the trend toward industrialization with foreign revenues from the coffee industry. With manufacturing on the rise and the coffee oligarchs imperiled by the growth of trade associated with World War I, the old order of café com leite and coronelismo eventually gave way to the political aspirations of the new urban groups: professionals, government and white-collar workers, merchants, bankers, and industrialists. Prosperity also contributed to a rapid rise in the population of working-class Southern and Eastern European immigrants—a population that contributed to the growth of trade unionism, anarchism, and socialism. In the post-World War I period, Brazil was hit by its first wave of general strikes and saw the establishment of the Communist Party in 1922. However, the overwhelming majority of the Brazilian population was composed of peasants with few if any ties to the growing labor movement. As a result, social reform movements would crop up in the 1920s, ultimately culminating in the Revolution of 1930.

Years Under the Military Regime
Brazilian society experienced extreme oppression under the military regime despite general economic growth during the Brazilian Miracle. The Brazilian military government was an authoritarian military dictatorship that ruled Brazil from April 1, 1964 to March 15, 1985. It began with the 1964 coup d'etat led by the armed forces against the administration of President Joao Goulart, who had previously served as Vice President and assumed the office of the presidency following the resignation of democratically elected Janio Quadros.
The military revolt was fomented by the governors of Minas Gerais, Sao Paulo, and Guanabara. The coup was supported by the United States Embassy and State Department. The fall of President Goulart worried many citizens. Many students, Catholics, Marxists, and workers formed groups that opposed military rule. A minority even engaged in direct armed struggle, although the vast majority of the resistance supported political solutions to the mass suspension of human rights. In the first few months after the coup, thousands of people were detained, and thousands of others were removed from their civil service or university positions. The military dictatorship lasted for almost 21 years despite initial pledges to the contrary. In 1967, it enacted a new, restrictive constitution that stifled freedom of speech and political opposition. The regime adopted nationalism, economic development, and anti-communism as its guidelines.

Establishing the Regime
Within the Army, agreement could not be reached on a civilian politician who could lead the government after the ouster of President Joao Goulart. On April 9, 1964, the coup leaders published the First Institutional Act, which greatly limited the freedoms of the 1946 constitution. Under the act, the President was granted authority to remove elected officials from office, dismiss civil servants, and revoke for up to 10 years the political rights of those found guilty of subversion or misuse of public funds. Three days after the publication of the act, Congress elected Army Chief of Staff Marshal Humberto de Alencar Castelo Branco to serve as president for the remainder of Goulart's term. Castelo Branco intended to oversee radical reforms to the political-economic system, but he refused to remain in power beyond the remainder of Goulart's term or to institutionalize the military as a governing body.
Although he intended to return power to elected officials at the end of Goulart's term, competing demands radicalized the situation. Military hardliners wanted to completely purge left-wing and populist influences for the duration of Castelo Branco's reforms. Civilians with leftist leanings criticized Castelo Branco for the extreme actions he took to implement reforms, whereas the military hardliners felt he was acting too leniently. On October 27, 1965, after two opposition candidates won in two provincial elections, Castelo Branco signed the Second Institutional Act, which set the stage for a purge of Congress, removing objecting state governors and expanding the President's arbitrary powers at the expense of the legislative and judiciary branches. This not only provided Castelo Branco with the ability to repress the left, but also provided a legal framework for the hard-line authoritarian rule of Artur da Costa e Silva (1967 – 69) and Emilio Garrastazu Medici (1969 – 74).

Rule of the Hardliners
Castelo Branco was succeeded as president by General Artur da Costa e Silva, a hardliner within the regime. Experimental artists and musicians formed the Tropicalia movement during this time, and some major popular musicians such as Gilberto Gil and Caetano Veloso were either arrested, imprisoned, or exiled. The military government had already been using various forms of torture as early as 1964 in order to gain information as well as intimidate and silence potential opponents. This radically increased after 1968. Widespread student protests also abounded during this period. In response, on December 13, 1968, Costa e Silva signed the Fifth Institutional Act, which gave the president dictatorial powers, dissolved Congress and the state legislatures, suspended the constitution, ended democratic government, suspended habeas corpus, and imposed censorship. On August 31, 1969, Costa e Silva suffered a stroke.
Instead of his vice president assuming the office of the presidency, all state power was assumed by the military, which then chose General Emilio Garrastazu Medici, another hardliner, as president. During his presidency, Medici presided over the worst human rights abuses of the period. Persecution and torture of dissidents, harassment of journalists, and press censorship became ubiquitous. A succession of kidnappings of foreign ambassadors in Brazil embarrassed the military government. Reactions, such as anti-government demonstrations and guerrilla movements, generated increasing repressive measures in turn. By the end of 1970, the official minimum wage went down to US $40 a month, and as a result, the more than one-third of the Brazilian workforce that made minimum wage lost approximately half their purchasing power in relation to 1960 levels. Nevertheless, Medici was popular because his term saw the fastest economic growth under any Brazilian president, a period of time popularly known as the Brazilian Miracle. The military entrusted economic policy to a group of technocrats led by Minister of Finance Delfim Netto. During these years, Brazil became an urban society with 67% of people living in cities. The government became directly involved in the economy, investing heavily in new highways, bridges, and railroads. Steel mills, petrochemical factories, hydroelectric power plants, and nuclear reactors were also built by large state-owned companies like Eletrobras and Petrobras. To reduce reliance on imported oil, the ethanol industry was heavily promoted. By 1980, 57% of Brazil’s exports were industrial goods compared to 20% in 1968. Additionally, average annual GDP growth was close to 10%. Comparatively, during President Goulart’s rule, the economy had been nearing a crisis, with annual inflation reaching 100%.
Additionally, Medici presented the First National Development Plan in 1971, which aimed at increasing the rate of economic growth, particularly in the Northeast and Amazonia. Brazil also won the 1970 Football World Cup, boosting national pride and Brazil’s international profile. Attributions Title Image Wikimedia Commons. Getúlio Vargas: https://en.wikipedia.org/wiki/Get%C3%BAlio_Vargas#/media/File:Getuliovargas1930.jpg Adapted from: https://www.coursehero.com/study-guides/boundless-worldhistory/mexico/ https://www.coursehero.com/study-guides/boundless-worldhistory/brazil/
Middle East Between the World Wars Overview In the aftermath of the First World War the Middle East experienced nationalism, decolonization, and religious strife. The peoples of the region challenged the priorities and values of the Allied Powers in their crafting of peace treaties. Those treaties, instead of stabilizing the Middle East, left uncertainty and continued instability. During the interwar period new nations emerged, each trying to find its place in the diverse complex of ethnic groups and religions. As part of this process of nation building, the principal imperial powers, Britain and France, had to negotiate a new path for their imperial interests in a period of accelerating decolonization. Ataturk and Turkish Independence The occupation of the Ottoman Empire by the Allies in the aftermath of World War I prompted the establishment of the Turkish national movement under the leadership of Mustafa Kemal. This led to the Turkish War of Independence, which resulted in the establishment of the Republic of Turkey. Learning Objectives Explain how the social, political, and military costs of World War I fostered geographic and demographic shifts in Europe, Africa, and the Middle East. Key Terms / Key Concepts Mustafa Kemal: a Turkish army officer, revolutionary, and founder of the Republic of Turkey, serving as its first President from 1923 until his death in 1938; instituted a series of political, legal, religious, cultural, social, and economic policy changes that were designed to convert the new Republic of Turkey into a secular, modern nation-state; eventually came to be known as Ataturk Background: Allied Occupation of Ottoman Empire For the Ottoman Empire the fighting of World War I ended on October 30, 1918, with the Armistice of Mudros signed between the Ottoman Empire and the Allies; this brought hostilities in the Middle Eastern theater to a close.
This armistice granted the Allies the right to occupy forts controlling the Straits of the Dardanelles and the Bosporus, as well as the right to occupy any territory in case of a threat to security. On November 13, 1918, a French brigade entered the city to begin the Occupation of Constantinople and its immediate dependencies, followed by a fleet consisting of British, French, Italian, and Greek ships deploying soldiers on the ground the next day. A wave of seizures by the Allies took place in the following months. Turkish National Movement The occupation of parts of the old Ottoman empire by the Allies in the aftermath of World War I prompted the establishment of the Turkish National Movement. The Movement was united around the leadership of Mustafa Kemal Atatürk and the authority of the Grand National Assembly set up in Ankara, which pursued the Turkish War of Independence. The Movement supported a progressively defined political ideology generally termed “Kemalism.” Kemalism called for the creation of a republic to represent the electorate, secular administration (laïcité) of that government, Turkish nationalism, a mixed economy with state participation in many sectors (as opposed to state socialism), and other forms of economic, political, social, and technological modernization. Turkish War of Independence Under the leadership of Mustafa Kemal, a military commander who distinguished himself during the 1915 Gallipoli Campaign, the Turkish War of Independence was waged with the aim of revoking the terms of the Treaty of Sèvres. The war began after some parts of Turkey were occupied and partitioned following the Ottoman Empire’s defeat in World War I. The War (May 19, 1919 – July 24, 1923) was fought between the Turkish nationalists and the proxies of the Allies—namely Greece on the Western front, Armenia on the Eastern, and France on the Southern, along with the United Kingdom and Italy in Constantinople (now Istanbul). 
Few of the present British, French, and Italian troops were deployed or engaged in combat. After a series of battles during the Greco-Turkish war, the Greek army advanced as far as the Sakarya River, just eighty kilometers west of the Turkish Grand National Assembly (GNA). On August 5, 1921, Mustafa Kemal was promoted to commander in chief of the forces by the GNA. The ensuing Battle of Sakarya was fought from August 23 to September 13, 1921, and it ended with the defeat of the Greeks. After this victory, on September 19, 1921, Mustafa Kemal Pasha was given the rank of Mareşal and the title of Gazi by the Grand National Assembly. The Allies, ignoring the extent of Kemal’s successes, hoped to impose a modified version of the Treaty of Sèvres as a peace settlement on Ankara, but the proposal was rejected. In August 1922, Kemal launched an all-out attack on the Greek lines at Afyonkarahisar in the Battle of Dumlupınar, and Turkish forces regained control of Smyrna on September 9, 1922. The next day, Mustafa Kemal sent a telegram to the League of Nations saying that the Turkish population was so worked up that the Ankara Government would not be responsible for massacres. By September 18, 1922, the occupying armies had been expelled, and the Ankara-based Turkish government, which had declared itself the legitimate government of the country on April 23, 1920, proceeded with the process of building the new Turkish nation. On November 1, 1922, the Turkish Parliament in Ankara formally abolished the Sultanate, ending 623 years of monarchical Ottoman rule. The Treaty of Lausanne of July 24, 1923, led to international recognition of the sovereignty of the newly formed “Republic of Turkey” as the successor state of the Ottoman Empire, and the republic was officially proclaimed on October 29, 1923, in Ankara, the country’s new capital. 
The Lausanne treaty stipulated a population exchange between Greece and Turkey in which 1.1 million Greeks left Turkey for Greece in exchange for 380,000 Muslims transferred from Greece to Turkey. On March 3, 1924, the Ottoman Caliphate was officially abolished and the last Caliph was exiled. Mustafa Kemal Atatürk’s Presidency As president Kemal introduced many radical reforms with the aim of founding a new secular republic from the remnants of the Ottoman empire. For the first 10 years of the new regime, the country saw a steady process of secular Westernization through Atatürk’s reforms, which included education; the discontinuation of religious and other titles; the closure of Islamic courts; the replacement of Islamic canon law with a secular civil code modeled after Switzerland’s and a penal code modeled after Italy’s; recognition of gender equality, including the grant of full political rights for women on December 5, 1934; language reform initiated by the newly founded Turkish Language Association, including replacement of the Ottoman Turkish alphabet with the new Turkish alphabet derived from the Latin alphabet; the law outlawing the fez; and the law on family names, which required that surnames be exclusively hereditary and familial, with no reference to military rank, civilian office, tribal affiliation, race, and/or ethnicity. The British Empire in the Middle East During the partitioning of the Ottoman Empire, the British promised the international Zionist movement their support in recreating the historic Jewish homeland in Palestine via the Balfour Declaration, a move that created much political conflict, which is still present today. Learning Objectives Explain how the social, political, and military costs of World War I fostered geographic and demographic shifts in Europe, Africa, and the Middle East. 
Key Terms / Key Concepts Zionism: a Jewish national revival movement in reaction to anti-Semitic and exclusionary nationalist movements in Europe; emerging during the late nineteenth century, its goal was the establishment of a Jewish homeland in Palestine Balfour Declaration: a letter dated November 1917 from the United Kingdom’s Foreign Secretary Arthur James Balfour to Walter Rothschild, 2nd Baron Rothschild, a leader of the British Jewish community, for transmission to the Zionist Federation of Great Britain and Ireland, pledging British support for a Jewish state British Mandate for Palestine: a geopolitical entity under British administration, carved out of Ottoman Southern Syria after World War I (British civil administration in Palestine operated from 1920 until 1948.) During World War I, continued Arab disquiet over Allied intentions led in 1918 to the British “Declaration to the Seven” and the “Anglo-French Declaration,” the latter promising “the complete and final liberation of the peoples who have for so long been oppressed by the Turks, and the setting up of national governments and administrations deriving their authority from the free exercise of the initiative and choice of the indigenous populations.” The British were awarded three mandated territories by the League of Nations after WWI: Palestine, Mesopotamia (later Iraq), and control of the coastal strip between the Mediterranean Sea and the River Jordan. Faisal was installed as King of Iraq; he was a son of Sharif Hussein (who helped lead the Arab Revolt against the Ottoman Empire). Transjordan provided a throne for another of Hussein’s sons: Abdullah. Mandatory Palestine was placed under direct British administration, and the Jewish population was allowed to increase, initially under British protection. Most of the Arabian Peninsula fell to another British ally, Ibn Saud, who created the Kingdom of Saudi Arabia in 1932.
The British Empire and Palestine British support for an increased Jewish presence in Palestine was primarily geopolitical, though idealistically embedded in 19th-century evangelical Christian feelings that the country should play a role in Christ’s Second Coming. Early British political support was precipitated in the 1830s and 1840s, as a result of the Eastern Crisis after Muhammad Ali occupied Syria and Palestine. Though these calculations had lapsed as the attempts of Theodor Herzl, the founder of Zionism, to obtain international support for his project failed, WWI led to renewed strategic assessments and political bargaining regarding the Middle and Far East. Zionism is a Jewish national revival movement that emerged during the late nineteenth century in reaction to anti-Semitic and exclusionary nationalist movements in Europe at that time. Its goal was the establishment of a Jewish homeland in the territory defined as the historic Land of Israel, roughly corresponding to Palestine, Canaan, or the Holy Land. Soon after its emergence, most leaders of the movement associated the main goal with creating the desired state in Palestine, then controlled by the Ottoman Empire. Zionism was first discussed at the British Cabinet level on November 9, 1914, four days after Britain’s declaration of war on the Ottoman Empire. David Lloyd George, then Chancellor of the Exchequer, discussed the future of Palestine. After the meeting Lloyd George assured Herbert Samuel—fellow Zionist and President of the Local Government Board—that “he was very keen to see a Jewish state established in Palestine.” George spoke of Zionist aspirations for a Jewish state in Palestine and of Palestine’s geographical importance to the British Empire. Samuel wrote in his memoirs: “I mentioned that two things would be essential—that the state should be neutralized, since it could not be large enough to defend itself, and that the free access of Christian pilgrims should be guaranteed….
I also said it would be a great advantage if the remainder of Syria were annexed by France, as it would be far better for the state to have a European power as neighbour than the Turk.” James Balfour of the Balfour Declaration, explaining the historic significance and context of Zionism, declared that: “The four Great Powers are committed to Zionism. And Zionism, be it right or wrong, good or bad, is rooted in age-long traditions, in present needs, in future hopes, of far profounder import than the desires and prejudices of the 700,000 Arabs who now inhabit that ancient land.” Through British intelligence officer T. E. Lawrence (aka: Lawrence of Arabia), Britain supported the establishment of a united Arab state covering a large area of the Arab Middle East in exchange for Arab support of the British during the war. Thus, the United Kingdom agreed in the McMahon–Hussein Correspondence that it would honor Arab independence if they revolted against the Ottomans, but the two sides had different interpretations of this agreement. In the end the UK and France divided up the area under the Sykes-Picot Agreement, an act of betrayal in the eyes of the Arabs. Further confusing the issue was the Balfour Declaration of 1917, promising British support for a Jewish “national home” in Palestine. At the war’s end the British and French set up a joint “Occupied Enemy Territory Administration” in what had been Ottoman Syria. The British achieved legitimacy for their continued control by obtaining a mandate from the League of Nations in June 1922. The formal objective of the League of Nations Mandate system was to administer parts of the defunct Ottoman Empire, which had been in control of the Middle East since the 16th century, “until such time as they are able to stand alone.” The civil Mandate administration was formalized with the League of Nations’ consent in 1923 under the British Mandate for Palestine, which covered two administrative areas. 
As the Second World War approached, the British empire was invested in the separate and, at points, conflicting agendas of nation building in the Middle East among the various peoples therein. The French Empire in the Middle East After World War I, Syria and Lebanon became a French protectorate under the League of Nations Mandate System, a move that was met immediately with armed resistance from Arab nationalists. The French government, like the British government, was trying to use the mandate system to maintain an imperial presence in the Middle East, and it encountered the same kinds of challenges from proponents of decolonization and nationalism. These forces for decolonization and nationalism were part of the larger stream of such movements across Africa, Asia, and, in different ways, the Americas. Learning Objectives Explain how the social, political, and military costs of World War I fostered geographic and demographic shifts in Europe, Africa, and the Middle East. Key Terms / Key Concepts League of Nations: an intergovernmental organization founded on January 10, 1920, as a result of the Paris Peace Conference that ended the First World War; the first international organization whose principal mission was to maintain world peace. Its primary goals as stated in its Covenant included preventing wars through collective security and disarmament and settling international disputes through negotiation and arbitration. French Mandate for Syria and the Lebanon Officially, the Mandate for Syria and the Lebanon (1923–1946) was a League of Nations mandate founded after the First World War, which was meant to partition the Ottoman Empire, especially Syria and Lebanon. The Mandate system was considered the antithesis of colonialism, with the governing country acting as a trustee until the inhabitants were able to stand on their own, at which point the Mandate would terminate and an independent state would be born.
When first arriving in Lebanon, the French were received as liberators by the Christian community, but as they entered Syria, they were faced with strong resistance. In response, the mandate region was subdivided into six states: Damascus (1920), Aleppo (1920), Alawites (1920), Jabal Druze (1921), the autonomous Sanjak of Alexandretta (1921, modern-day Hatay), and the State of Greater Lebanon (1920), which later became the modern country of Lebanon. The drawing of those states was based in part on the sectarian makeup of Syria. However, nearly all the Syrian sects were hostile to the French mandate and the division it created, and there were numerous revolts in all of the Syrian states. The Maronite Christians of Mount Lebanon, on the other hand, were a community with a dream of independence that was realized under the French. Greater Lebanon was the exception among the other newly formed states, in that its Christian citizens were not hostile to the French Mandate. Although there were uprisings in the respective states, the French purposefully gave different ethnic and religious groups in the Levant their own lands in the hopes of prolonging their rule. During this time of world decolonization, the French hoped to focus on fragmenting the various groups in the region, so the local population would not unite behind a larger nationalist movement to overthrow colonial rule. In addition, administration of colonial governments was heavily dominated by the French. Local authorities were given very little power and did not have the authority to independently decide policy. The small amount of power that local leaders had could easily be overruled by French officials. The French did everything possible to prevent people in the Levant from developing self-sufficient governing bodies. For instance, in 1930 France extended its constitution to Syria.
Rise in Conflict With the defeat of the Ottomans in Syria, British troops under General Sir Edmund Allenby entered Damascus in 1918 accompanied by troops of the Arab Revolt led by Faisal, son of Sharif Hussein of Mecca. The new Arab administration formed local governments in the major Syrian cities, and the pan-Arab flag was raised all over Syria. The Arabs hoped, with faith in earlier British promises, that the new state would include all the Arab lands stretching from Aleppo in northern Syria to Aden in southern Yemen. However, in accordance with the secret Sykes-Picot Agreement between Britain and France, General Allenby assigned the Arab administration only the interior regions of Syria (the eastern zone). On October 8, French troops disembarked in Beirut and occupied the Lebanese coastal region south to Naqoura (the western zone), replacing British troops there. The French immediately dissolved the local Arab governments in the region. France demanded full implementation of the Sykes-Picot Agreement, with Syria under its control. On November 26, 1919, British forces withdrew from Damascus to avoid confrontation, leaving the Arab government to face France. Unrest erupted in Syria when Faisal accepted a compromise with French Prime Minister Clemenceau and Zionist leader Chaim Weizmann over Jewish immigration to Palestine. Anti-Hashemite demonstrations broke out, and Muslim inhabitants in and around Mount Lebanon revolted for fear of being incorporated into a new, mainly Christian state of Greater Lebanon, as part of France’s claim to these territories in the Levant was that France was a protector of the minority Christian communities. On April 25, 1920, the supreme inter-Allied council, which was formulating the Treaty of Sèvres, granted France the mandate of Syria (including Lebanon), and granted Britain the Mandate of Palestine (including Jordan) and Iraq.
Syrians reacted with violent demonstrations, and a new government headed by Ali Rida al-Rikabi was formed on May 9, 1920. The new government decided to organize general conscription and began forming an army. On July 14, 1920, General Gouraud issued an ultimatum to Faisal, giving him the choice between submission and abdication. Realizing that the power balance was not in his favor, Faisal chose to cooperate. However, the young minister of war, Youssef al-Azmeh, refused to comply. In the resulting Franco-Syrian War, Syrian troops under al-Azmeh met French forces under General Mariano Goybet at the Battle of Maysaloun. The French won the battle in less than a day. Azmeh died on the battlefield along with many of the Syrian troops. Goybet entered Damascus on July 24, 1920. End of the Mandate With the fall of France in 1940 during World War II, Syria came under the control of the Vichy Government until the British and Free French invaded and occupied the country in July 1941. Syria proclaimed its independence again in 1941, but it was not until January 1, 1944, that it was recognized as an independent republic. On September 27, 1941, France proclaimed, by virtue of and within the framework of the Mandate, the independence and sovereignty of the Syrian State. The proclamation said “the independence and sovereignty of Syria and Lebanon will not affect the juridical situation as it results from the Mandate Act.” There were protests in 1945 over the slow French withdrawal; the French responded to these protests with artillery. In an effort to stop the movement toward independence, French troops occupied the Syrian parliament in May 1945 and cut off Damascus’s electricity. Training their guns on Damascus’s old city, the French killed 400 Syrians and destroyed hundreds of homes.
Continuing pressure from Syrian nationalist groups and the British forced the French to evacuate the last of their troops in April 1946, leaving the country in the hands of a republican government that had been formed during the mandate. Although rapid economic development followed the declaration of independence, Syrian politics from independence through the late 1960s were marked by upheaval and political instability. The Partitioning of Palestine The UN Partition Plan for Palestine was a proposal by the United Nations that recommended a partition of Mandatory Palestine into independent Arab and Jewish states. It was rejected by the Palestinians, leading to a civil war and the end of the British Mandate. Learning Objectives Explain how the social, political, and military costs of World War I fostered geographic and demographic shifts in Europe, Africa, and the Middle East. Key Terms / Key Concepts League of Nations: an intergovernmental organization founded on January 10, 1920, as a result of the Paris Peace Conference that ended the First World War; the first international organization whose principal mission was to maintain world peace. Its primary goals as stated in its Covenant included preventing wars through collective security and disarmament and settling international disputes through negotiation and arbitration. British Mandate for Palestine: a geopolitical entity under British administration, carved out of Ottoman Southern Syria after World War I (British civil administration in Palestine operated from 1920 until 1948.) Background and Early Proposals for Partition The League of Nations formalized British administration of Palestine as the Palestine Mandate in 1923. This mandate was part of the partitioning of the Ottoman Empire following World War I.
The British Mandate in Palestine reaffirmed the 1917 British commitment to the Balfour Declaration for the establishment in Palestine of a “National Home” for the Jewish people, with the prerogative to carry it out. A 1918 British census estimated that 700,000 Arabs and 56,000 Jews lived in Palestine. During the interwar period it became clear that the different groups in Palestine would not live in harmony. In 1937, following a six-month Arab General Strike and armed insurrection that aimed to pursue national independence, the British established the Peel Commission. The Jewish population had been attacked throughout the region during the Arab revolt, leading to the idea that the two populations could not be reconciled. The Commission concluded that the British Palestine Mandate had become unworkable, and recommended partition into an Arab state linked to Transjordan, a small Jewish state, and a mandatory zone. To address problems arising from the presence of national minorities in each area, the Commission suggested a land and population exchange involving the transfer of some 225,000 Arabs living in the envisaged Jewish state and 1,250 Jews living in a future Arab state, a measure deemed compulsory “in the last resort.” The Palestinian Arab leadership rejected partition as unacceptable, given the inequality in the proposed population exchange and the transfer of one-third of Palestine, including most of its best agricultural land, to recent immigrants. However, the Jewish leaders—Chaim Weizmann and David Ben-Gurion—persuaded the Zionist Congress to lend provisional approval to the Peel recommendations as a basis for further negotiations. In a letter to his son in October 1937, Ben-Gurion explained that partition would be a first step to “possession of the land as a whole.” The British Woodhead Commission was set up to examine the practicality of partition. The Peel plan was rejected, and two possible alternatives were considered.
In 1938 the British government issued a policy statement declaring that “the political, administrative and financial difficulties involved in the proposal to create independent Arab and Jewish States inside Palestine are so great that this solution of the problem is impracticable.” Representatives of Arabs and Jews were invited to London for the St. James Conference, which proved unsuccessful. The MacDonald White Paper of May 1939 declared that it was “not part of [the British government’s] policy that Palestine should become a Jewish State,” and sought to limit Jewish immigration to Palestine and to restrict Arab land sales to Jews. However, the League of Nations commission held that the White Paper was in conflict with the terms of the Mandate as put forth in the past. The outbreak of the Second World War suspended any further deliberations. The Jewish Agency hoped to persuade the British to restore Jewish immigration rights and cooperated with the British in the war against fascism. Aliyah Bet was organized to spirit Jews out of Nazi-controlled Europe despite British prohibitions. The White Paper also led to the formation of Lehi, a small Jewish organization that opposed the British. After World War II, in August 1945 President Truman asked for the admission of 100,000 Holocaust survivors into Palestine, but the British maintained limits on Jewish immigration in line with the 1939 White Paper. The Jewish community rejected the restriction on immigration and organized an armed resistance. These actions and United States pressure to end the anti-immigration policy led to the establishment of the Anglo-American Committee of Inquiry. In April 1946, the Committee reached a unanimous decision recommending the immediate admission of 100,000 Jewish refugees from Europe into Palestine, the repeal of the White Paper restrictions on land sales to Jews, a country that would be neither Arab nor Jewish, and the extension of U.N. trusteeship. The U.S.
endorsed the Commission findings concerning Jewish immigration and land purchase restrictions, while the U.K. conditioned its implementation on U.S. assistance in case of another Arab revolt. In effect, the British continued to carry out White Paper policy. The recommendations also triggered violent demonstrations in the Arab states and calls for a Jihad and an annihilation of all European Jews in Palestine. Saudi Arabia Saudi Arabia, officially known as the Kingdom of Saudi Arabia, is an Arab state in Western Asia constituting the bulk of the Arabian Peninsula. The area of modern-day Saudi Arabia formerly consisted of four distinct regions: Hejaz, Najd, parts of Eastern Arabia (Al-Ahsa), and Southern Arabia (‘Asir). The Kingdom of Saudi Arabia was founded in 1932 by Ibn Saud. He united the four regions into a single state through a series of conquests beginning in 1902 with the capture of Riyadh, the ancestral home of his family, the House of Saud. Saudi Arabia has since been an absolute monarchy, effectively a hereditary dictatorship governed along Islamic lines. The ultraconservative Wahhabi religious movement within Sunni Islam has been called “the predominant feature of Saudi culture,” with its global spread largely financed by the oil and gas trade. Saudi Arabia is sometimes called “the Land of the Two Holy Mosques” in reference to Al-Masjid al-Haram (in Mecca) and Al-Masjid an-Nabawi (in Medina), the two holiest places in Islam. Learning Objectives Explain how the social, political, and military costs of World War I fostered geographic and demographic shifts in Europe, Africa, and the Middle East. The new kingdom was one of the poorest countries in the world, reliant on limited agriculture and pilgrimage revenues. In 1938, vast reserves of oil were discovered in the Al-Ahsa region along the coast of the Persian Gulf, and full-scale development of the oil fields began in 1941 under the U.S.-controlled Aramco (Arabian American Oil Company).
Oil provided Saudi Arabia with economic prosperity and substantial political leverage internationally. Saudi Arabia has since become the world’s largest oil producer and exporter, controlling the world’s second largest oil reserves and the sixth largest gas reserves. The kingdom is categorized as a World Bank high-income economy with a high Human Development Index, and it is the only Arab country to be part of the G-20 major economies. However, the economy of Saudi Arabia is the least diversified in the Gulf Cooperation Council, lacking any significant service or production sector (apart from the extraction of resources). The country has attracted criticism for its restrictions on women’s rights and its use of capital punishment. Jordan After the Great Arab Revolt against the Ottomans in 1916 during World War I, the Ottoman Empire was partitioned by Britain and France. The Emirate of Transjordan was established in 1921 by then Emir Abdullah I and became a British protectorate. In 1946, Jordan became an independent state officially known as the Hashemite Kingdom of Transjordan. Jordan captured the West Bank during the 1948 Arab–Israeli War, and the name of the state was changed to the Hashemite Kingdom of Jordan in 1949. Jordan is a founding member of the Arab League and the Organisation of Islamic Cooperation, and it is one of two Arab states to have signed a peace treaty with Israel. The country is a constitutional monarchy, but the king holds wide executive and legislative powers. The roots of the instability and violence in the Middle East go back to the settlements after the First World War. Conflicting agendas produced compromises unacceptable to many of the interested parties.
Attributions
Images courtesy of Wikimedia Commons
Title Image - photo of Turkish troops entering Istanbul, 6 October 1923. Attribution: Unknown author, Public domain, via Wikimedia Commons. Provided by: Wikipedia. Location: https://commons.wikimedia.org/wiki/File:Liberation_of_Istanbul_on_October_6,_1923.jpg. License: CC BY-SA: Attribution-ShareAlike
Boundless World History, "Partition of the Ottoman Empire." Adapted from https://courses.lumenlearning.com/boundless-worldhistory/chapter/partition-of-the-ottoman-empire/
CC LICENSED CONTENT, SHARED PREVIOUSLY
Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike
CC LICENSED CONTENT, SPECIFIC ATTRIBUTION
- History of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Decline and modernization of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Decline_and_modernization_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Defeat and dissolution of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Defeat_and_dissolution_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Sultanvahideddin.jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Ottoman_Empire#/media/File:Sultanvahideddin.jpg. License: CC BY-SA: Attribution-ShareAlike
- Turkish National Movement. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Turkish_National_Movement. License: CC BY-SA: Attribution-ShareAlike
- Turkish War of Independence. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Turkish_War_of_Independence. License: CC BY-SA: Attribution-ShareAlike
- Mustafa Kemal Ataturk. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Mustafa_Kemal_Ataturk. License: CC BY-SA: Attribution-ShareAlike
- Turkey. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Turkey. License: CC BY-SA: Attribution-ShareAlike
- History of the Republic of Turkey. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_Republic_of_Turkey. License: CC BY-SA: Attribution-ShareAlike
- Satirical_map_of_Europe,_1877.jpg. Provided by: Wikipedia. Located at: https://upload.wikimedia.org/wikipedia/commons/1/18/Satirical_map_of_Europe%2C_1877.jpg. License: CC BY-SA: Attribution-ShareAlike
- Türk_Kurtuluş_Savaşı_-_kolaj.jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Turkish_War_of_Independence#/media/File:Turk_Kurtulus_Savasi_-_kolaj.jpg. License: CC BY-SA: Attribution-ShareAlike
- Morgenthau336.jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Armenian_Genocide#/media/File:Morgenthau336.jpg. License: CC BY-SA: Attribution-ShareAlike
- History of the foreign relations of the United Kingdom. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_foreign_relations_of_the_United_Kingdom. License: CC BY-SA: Attribution-ShareAlike
- Partitioning of the Ottoman Empire. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Partitioning_of_the_Ottoman_Empire. License: CC BY-SA: Attribution-ShareAlike
- Mandatory Palestine. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Mandatory_Palestine. License: CC BY-SA: Attribution-ShareAlike
- Balfour Declaration. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Balfour_Declaration. License: CC BY-SA: Attribution-ShareAlike
- Paris Peace Conference, 1919. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Paris_Peace_Conference,_1919. License: CC BY-SA: Attribution-ShareAlike
- MPK1-426_Sykes_Picot_Agreement_Map_signed_8_May_1916.jpg. Provided by: Wikipedia. Located at: https://upload.wikimedia.org/wikipedia/commons/f/f9/MPK1-426_Sykes_Picot_Agreement_Map_signed_8_May_1916.jpg. License: CC BY-SA: Attribution-ShareAlike
- A_world_in_perplexity_(1918)_(14780310121).jpg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Mandatory_Palestine#/media/File:A_world_in_perplexity_(1918)_(14780310121).jpg. License: CC BY-SA: Attribution-ShareAlike
- French Mandate for Syria and the Lebanon. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_Mandate_for_Syria_and_the_Lebanon. License: CC BY-SA: Attribution-ShareAlike
- History of Syria. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_Syria. License: CC BY-SA: Attribution-ShareAlike
- 440px-French_Mandate_for_Syria_and_the_Lebanon_map_en.svg.png. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/French_Mandate_for_Syria_and_the_Lebanon#/media/File:French_Mandate_for_Syria_and_the_Lebanon_map_en.svg. License: CC BY-SA: Attribution-ShareAlike
- Anglo-Persian Oil Company. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Anglo-Persian_Oil_Company. License: CC BY-SA: Attribution-ShareAlike
- Red Line Agreement. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Red_Line_Agreement. License: CC BY-SA: Attribution-ShareAlike
- Resource curse. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Resource_curse. License: CC BY-SA: Attribution-ShareAlike
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/87981/overview", "title": "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 13: Post WWI, Middle East Between the World Wars", "author": "Anna McCollum" }
https://oercommons.org/courseware/lesson/88056/overview
Neutral Nations in World War II Overview Choosing Neutrality: Spain, Sweden, Switzerland Although most European countries chose to support either the Allies or the Axis Powers in World War II, a handful remained neutral for various reasons. Often those reasons were economic and political. In addition to these few European nations, most Latin American countries also chose neutrality in World War II. Learning Objectives - Identify the nations which were neutral during all or part of World War II, explain the reasons for the neutrality of each, outline the course of the neutrality of each, and assess the historic impact and significance of the neutrality of each. Key Terms / Key Concepts Francisco Franco: a Spanish general who ruled over Spain as a dictator for 36 years, from 1939 until his death in 1975 (He took control of Spain from the government of the Second Spanish Republic after winning the Civil War, and the regime he established remained in place until the Spanish Constitution of 1978 went into effect.) Thirty Years War: a series of wars in Central Europe between 1618 and 1648, growing out of the Protestant Reformation NATO: an intergovernmental military alliance signed on April 4, 1949 and including the five Treaty of Brussels states (Belgium, the Netherlands, Luxembourg, France, and the United Kingdom) plus the United States, Canada, Portugal, Italy, Norway, Denmark, and Iceland Warsaw Pact: a collective defense treaty among the Soviet Union and seven other Soviet satellite states in Central and Eastern Europe during the Cold War Spain Although Spain was under the fascist government of General Francisco Franco, it remained neutral during the Second World War. Neither the Allied nor the Axis Powers in the European Theater relished the prospect of opening another front in order to force Spain into action. Moreover, after the Spanish Civil War, Franco’s fascist government was in no position to participate in the war as a belligerent.
At the beginning of World War II, Franco had considered joining the Axis Powers, but his demands for an alliance with Germany proved too much for Hitler. Franco favored Hitler’s and Mussolini’s governments ideologically and believed that Italy and Germany would protect Spain. Through 1943 the Allies treated Franco’s government delicately. The Allies provided Spain with the food and raw materials needed to keep its economy running. In return, Franco’s government did not threaten British access to Gibraltar on the southern tip of Spain. British possession of Gibraltar allowed the Allies to maintain control over the Mediterranean Sea and win the Battle of the Atlantic against German U-boats. Both were necessary for Allied victory in the European Theater. Sweden Geography, iron ore deposits, and the imperatives of the Allied Powers and Germany were the reasons for Swedish neutrality. Ideologically Sweden supported the Allies, but after the German conquest of Denmark and Norway in the spring of 1940, and because of its own small military at that time, Sweden had to accept neutrality and even provide Germany with iron ore. As the Allied war effort progressed against Germany after 1944 and as the Swedish military grew more powerful, the Swedish government acted more assertively in dealing with a weakening Germany. This included denying German military demands in the last year of the war. After WWII, Sweden maintained its neutral and non-aligned orientation in the Cold War. Switzerland Swiss neutrality was guaranteed in part by its mountainous geography, which served to partially isolate it from its neighbors. Switzerland had been neutral in the First World War and had a tradition of neutrality in European wars going back to the Thirty Years War in the seventeenth century. In addition, Switzerland had a small but effective military, which would have made conquest by either side costly.
Despite these advantages, Swiss leaders feared a possible German invasion throughout the war. Both sides tolerated Switzerland as a venue for covert intelligence operations and secure banking transactions. Throughout the war refugees streamed into Switzerland, including Jews escaping Hitler’s genocide, members of the French resistance to Hitler’s occupation of France, and various groups of partisans from Italy. After the war Switzerland continued its policy of neutrality in the Cold War between NATO and the Soviet-led Warsaw Pact alliance.
Attributions
Images courtesy of Wikimedia Commons
Title Image - map of Allied, Axis, and neutral nations during World War II. Attribution: Yonghokim, Joaopais + Various (See below.), CC BY-SA 3.0 <http://creativecommons.org/licenses/by-sa/3.0/>, via Wikimedia Commons. Provided by: Wikipedia Commons. Location: https://commons.wikimedia.org/wiki/File:Map_of_participants_in_World_War_II.png. License: Creative Commons Attribution-Share Alike 3.0 Unported
Wikipedia, "Neutral powers during World War II." Adapted from https://en.wikipedia.org/wiki/Neutral_powers_during_World_War_II
CC LICENSED CONTENT, SHARED PREVIOUSLY
Curation and Revision. Provided by: Wikipedia.com. License: Creative Commons Attribution-ShareAlike License 3.0
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/88056/overview", "title": "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 14: The World Afire: World War II, Neutral Nations in World War II", "author": "Anna McCollum" }
https://oercommons.org/courseware/lesson/88116/overview
Contemporary China and India Overview Contemporary China In the 21st century, two nations alone, China and India, account for just over one-third of the world’s population. In this century, both nations have become economic powerhouses that have asserted what they each believe to be their national interest: China’s domination over East Asia and India’s embrace of its own Hindu traditions. Learning Objectives - Examine China’s policies regarding the South China Sea. - Analyze the rise of Hindu nationalism and Neoliberal policies in contemporary India. Key Terms / Key Concepts Association of Southeast Asian Nations: a regional organization comprising ten Southeast Asian states, which promotes intergovernmental cooperation and facilitates economic integration amongst its members exclusive economic zone: a sea zone prescribed by the United Nations Convention on the Law of the Sea, over which a state has special rights regarding the exploration and use of marine resources, including energy production from water and wind nine-dash line: a term that refers to the demarcation line used initially by the government of the Republic of China (ROC/Taiwan) and subsequently also by the government of the People’s Republic of China (PRC), for their claims of the major part of the South China Sea Philippines v. China: an arbitration case brought by the Republic of the Philippines against the People’s Republic of China under Annex VII to the United Nations Convention on the Law of the Sea (UNCLOS) concerning certain issues in the South China Sea, including the legality of China’s “nine-dash line” claim United Nations Convention on the Law of the Sea: the international agreement that defines the rights and responsibilities of nations with respect to their use of the world’s oceans, establishing guidelines for businesses, the environment, and the management of marine natural resources (It was concluded in 1982.) 
Tension in the South China Sea One-third of the world’s shipping sails through the South China Sea, and it is believed to hold huge oil and gas reserves beneath its seabed. The South China Sea is a marginal sea that is part of the Pacific Ocean, encompassing an area from the Karimata and Malacca Straits to the Strait of Taiwan (around 3.5 million sq km or 1.4 million sq mi). The sea is located south of China, east of Vietnam and Cambodia, northwest of the Philippines, east of the Malay peninsula and Sumatra, up to the Strait of Malacca in the west, and north of the Bangka–Belitung Islands and Borneo. Several countries have made competing territorial claims over the South China Sea, and these territorial disputes are Asia’s most potentially dangerous source of conflict. Both the People’s Republic of China (PRC) and the Republic of China (ROC, commonly known as Taiwan) claim almost the entire body as their own, demarcating their claims within what is known as the nine-dash line. The area overlaps the exclusive economic zone (EEZ) claims of Brunei, Indonesia, Malaysia, the Philippines, Taiwan, and Vietnam. Indonesia, China, and Taiwan all lay claim over waters northeast of the Natuna Islands. Vietnam, China, and Taiwan all lay claim over waters west of the Spratly Islands, some of which are also disputed between Vietnam, China, Taiwan, Brunei, Malaysia, and the Philippines. The Paracel Islands are disputed between China, Taiwan, and Vietnam. Malaysia, Cambodia, Thailand, and Vietnam lay claim over areas in the Gulf of Thailand. Singapore and Malaysia claim waters along the Strait of Johore and the Strait of Singapore. The disputes include the islands, reefs, banks, and other features of the South China Sea, including the Spratly Islands, Paracel Islands, and various boundaries in the Gulf of Tonkin. There are further disputes, including the waters near the Indonesian Natuna Islands, which many do not regard as part of the South China Sea. 
The states with these conflicting claims are interested in retaining or acquiring the rights to fishing areas, the exploration and potential exploitation of crude oil and natural gas in the seabed of various parts of the South China Sea, and the strategic control of important shipping lanes. Importance of the South China Sea The area of the South China Sea may be rich in oil and natural gas deposits although estimates vary from 7.5 billion to 125 billion barrels of oil and from 190 trillion cubic feet to 500 trillion cubic feet of natural gas. The once abundant fishing opportunities within the region are another motivation for claims. China believes that the value in fishing and oil from the sea may be as much as a trillion dollars. According to studies made by the Department of Environment and Natural Resources (Philippines), this body of water holds one-third of the entire world’s marine biodiversity, making it a very important area for the ecosystem. However, the fish stocks in the area are depleted and countries are using fishing bans to assert their sovereignty claims. Finally, the area is one of the busiest shipping routes in the world. In the 1980s, at least 270 merchant ships used the route each day. Currently, more than half the tonnage of oil transported by sea passes through the South China Sea, a figure rising steadily with the growth of the Chinese consumption of oil. This traffic is three times greater than that passing through the Suez Canal and five times more than the Panama Canal. Disputes China and Vietnam have both been vigorous in prosecuting their claims to this region. China and South Vietnam each controlled part of the Paracel Islands before 1974. A brief conflict in 1974 resulted in 18 Chinese and 53 Vietnamese deaths and China has controlled the whole of Paracel since then. The Spratly Islands have been the site of a naval clash, in which over 70 Vietnamese sailors were killed in 1988. 
Disputing claimants regularly report clashes between naval vessels. In 2011, a vessel identifying itself as the Chinese Navy reportedly contacted one of India’s amphibious assault vessels on an open radio channel; the Indian vessel was on a friendly visit to Vietnam, when it was spotted at a distance of 45 nautical miles from the Vietnamese coast in the disputed South China Sea. The Chinese vessel stated that the Indian ship was entering Chinese waters. The spokesperson for the Indian Navy clarified that as no ship or aircraft was visible, the vessel would thus proceed on her onward journey as scheduled. The same year, shortly after China and Vietnam had signed an agreement seeking to contain a dispute over the South China Sea, India’s state-run explorer, Oil and Natural Gas Corporation (ONGC) said that its overseas investment arm, ONGC Videsh Limited, had signed a three-year deal with PetroVietnam for developing long-term cooperation in the oil sector. The ONGC also accepted Vietnam’s offer of exploration in certain specified blocks in the South China Sea. In response, the Chinese Foreign Ministry spokesperson Jiang Yu issued a protest. Vietnam and Japan reached an agreement early in 1978 on the development of oil in the South China Sea. By 2012, Vietnam had concluded some 60 oil and gas exploration and production contracts with various foreign companies. In 2011, Vietnam was the sixth-largest oil producer in the Asia-Pacific region, although the country is a net oil importer. China’s first independently designed and constructed oil drilling platform in the South China Sea was the Ocean Oil 981. It began operation in 2012, 320 kilometers (200 mi) southeast of Hong Kong, employing 160 people. In 2014, the platform was moved near to the Paracel Islands, which propelled Vietnam to state that this move violated their territorial claims. 
Chinese officials said it was legal, stating the area lies in waters surrounding the Paracel Islands, which China occupies and controls militarily. Other nations besides Vietnam and China have contested for this region. In 2012 and 2013, Vietnam and Taiwan clashed over what Vietnam considered anti-Vietnamese military exercises by Taiwan. Prior to the dispute around the sea areas, fishermen from involved countries tended to enter on each other’s controlled islands and EEZ, which led to conflicts with the authorities that controlled the areas, as they were unaware of the exact borders. Due to the depletion of the fishing resources in their maritime areas, fishermen felt compelled to fish in the neighboring country’s areas. After Joko Widodo became President of Indonesia in 2014, he imposed a policy threatening to destroy the vessels of any foreign fishermen caught illegally fishing in Indonesian waters. Since then, many neighboring countries’ fishing vessels have been blown up by Indonesian authorities. On May 21, 2015, around 41 fishing vessels from China, Vietnam, Thailand, and the Philippines were blown up. On March 19, 2016, China Coast Guard prevented its fishermen from being detained by Indonesian authorities when the Chinese fishermen were caught fishing near the waters around Natuna, leading to a protest by Indonesian authorities. Further Indonesian campaigns against foreign fishermen resulted in 23 fishing boats from Malaysia and Vietnam being blown up on April 5, 2016. The South China Sea had also become known for Indonesian pirates, with frequent attacks on Malaysian, Singaporean, and Vietnamese fishing vessels and for Filipino pirates attacking Vietnamese fishermen. The Association of Southeast Asian Nations (ASEAN), in general, and Malaysia, in particular, have been keen to ensure that the territorial disputes within the South China Sea do not escalate into armed conflicts. 
Joint Development Authorities have been set up in areas of overlapping claims to jointly develop the area and divide the profits equally, without settling the issue of sovereignty. Generally, China has preferred to resolve competing claims bilaterally, while some ASEAN countries prefer multi-lateral talks, believing that they are disadvantaged in bilateral negotiations with China. ASEAN countries maintain that only multilateral talks could effectively resolve the competing claims because so many countries claim the same territory. For example, the International Court of Justice settled the overlapping claims over Pedra Branca/Pulau Batu Putih, including neighboring Middle Rocks, by Singapore and Malaysia in 2008, awarding Pedra Branca/Pulau Batu Puteh to Singapore and Middle Rocks to Malaysia. An estimated US $5 trillion worth of global trade passes through the South China Sea and there are many non-claimant states that want the South China Sea to remain as international waters. Several states (e.g., the United States) are conducting “freedom of navigation” operations to promote this situation. U.S. and Chinese Positions The United States and China are currently in disagreement over the South China Sea, exacerbated by the fact that the US is not a party to the United Nations Convention on the Law of the Sea (the United States recognizes the UNCLOS as a codification of customary international law but has not ratified it). Nevertheless, the U.S. has stood by its claim that “peaceful surveillance activities and other military activities without permission in a country’s exclusive economic zone” are allowed under the convention. In relation to the dispute, former U.S. Secretary of State Hillary Clinton voiced her support for fair access by reiterating that freedom of navigation and respect of international law is a matter of national interest to the United States.
Clinton testified in support of congressional approval of the Law of the Sea Convention, which would strengthen U.S. ability to support countries that oppose Chinese claims to certain islands in the area. Clinton also called for China to resolve the territorial dispute, but China responded by demanding the U.S. stay out of the issue. China’s Foreign Minister Yang Jiechi stated that the stand was “in effect an attack on China” and warned the United States against making the South China Sea an international or multilateral issue. This came at a time when both countries were engaging in naval exercises in a show of force to the opposing side, which increased tensions in the region. The U.S. Department of Defense released a statement in which it opposed the use of force to resolve the dispute and accused China of assertive behavior. In 2014, the United States responded to China’s claims over the fishing grounds of other nations by stating that “China has not offered any explanation or basis under international law for these extensive maritime claims.” While the US pledged American support for the Philippines in its territorial conflicts with the PRC, the Chinese Foreign Ministry asked the United States to maintain a neutral position on the issue. In 2014 and 2015, the United States continued freedom of navigation operations, including in the South China Sea. In 2015, Secretary of Defense Ash Carter warned China to halt its rapid island-building. In November 2015, two US B-52 strategic bombers flew near artificial Chinese-built islands in the area of the Spratly Islands and were contacted by Chinese ground controllers but continued their mission undeterred. In response to U.S. Secretary of State, Rex Tillerson’s comments on blocking access to Chinese man-made islands in the South China Sea, in January 2017, the Communist Party-controlled Global Times warned of a “large-scale war” between the U.S. 
and China, noting, “Unless Washington plans to wage a large-scale war in the South China Sea, any other approaches to prevent Chinese access to the islands will be foolish.” The position of China on its maritime claims based on UNCLOS and history has been ambiguous, particularly with the nine-dash line map. For example, in 2011, China stated that it has undisputed sovereignty over the islands and the adjacent waters, suggesting it is claiming sovereignty over its territorial waters, a position consistent with UNCLOS. However, it also stated that China enjoys sovereign rights and jurisdiction over the relevant waters along with the seabed and subsoil contained in this region, suggesting that China is claiming sovereignty over all of the maritime space (includes all the geographic features and the waters within the nine-dash line). China has also repeatedly indicated that the Chinese claims are drawn on a historical basis. The vast majority of international legal experts have concluded that China’s current claims, which are based on historical claims, are invalid. For example, in 2013, the Republic of the Philippines brought an arbitration case against the People’s Republic of China under Annex VII to UNCLOS, concerning certain issues in the South China Sea including the legality of China’s “nine-dash line” claim (Philippines v. China, known also as the South China Sea Arbitration). China declared that it would not participate in the arbitration but in 2015, the arbitral tribunal ruled that it had jurisdiction over the case, taking up seven of the 15 submissions made by the Philippines. In 2016, the tribunal ruled in favor of the Philippines. It clarified that it would not “…rule on any question of sovereignty over land territory and would not delimit any maritime boundary between the Parties.” The tribunal also confirmed that China has “no historical rights” based on the “nine-dash line” map. China has rejected the ruling, as has Taiwan. 
Contemporary India Over the first two decades of the 21st century, India's economy has expanded, but tensions between its Muslim and Hindu communities have increased as well. Key Terms / Key Concepts 2002 Gujarat riots: a three-day period of inter-communal violence in the western Indian state of Gujarat in 2002 Bharatiya Janata Party: one of the two major political parties in India, along with the Indian National Congress; as of 2017, India’s largest political party in terms of representation in the national parliament and state assemblies Rashtriya Swayamsevak Sangh: a right-wing, Hindu nationalist, paramilitary volunteer organization in India widely regarded as the parent organization of the ruling party of India, the Bharatiya Janata Party; founded in 1925, the world’s largest non-governmental organization that claims commitment to selfless service to India India under Modi Under Modi, a right-wing, nationalist Prime Minister, India has gone through numerous neoliberal reforms that have contributed to its impressive economic growth, pleasing businesspeople and industrialists but widening inequalities between the wealthy and the poor and highlighting the ongoing challenges of poverty, corruption, and gender violence. Narendra Modi (b. 1950) is the current Prime Minister of India (as of March 2017), and he has been in office since May 2014. He was the Chief Minister of Gujarat from 2001 to 2014. He is the Member of Parliament for the Varanasi district (Uttar Pradesh) and a member of the Bharatiya Janata Party (BJP), one of the two major political parties in India. Modi is also a member of the Rashtriya Swayamsevak Sangh (RSS), a right-wing, Hindu nationalist, paramilitary volunteer organization in India widely regarded as the parent organization of the BJP. Born to a Gujarati family in Vadnagar, Modi helped his father sell tea as a child and later ran his own stall. He was introduced to the RSS at age eight, beginning a long association with the organization.
He left home after graduating from school, partly because of an arranged marriage, which he did not accept. Modi traveled around India for two years and visited a number of religious centers. In 1971 he became a full-time worker for the RSS. During the state of emergency imposed across the country in 1975, Modi was forced to go into hiding. The RSS assigned him to the BJP in 1985, and he held several positions within the party hierarchy until 2001, rising to the rank of general secretary. Modi was appointed chief minister of Gujarat in 2001. His administration has been considered complicit in the 2002 Gujarat riots—a three-day period of inter-communal violence. Following this incident, outbreaks of violence in Ahmedabad occurred for three weeks. Statewide, communal riots against the minority Muslim population occurred for three months. According to official figures, the riots resulted in the deaths of 790 Muslims and 254 Hindus. 2,500 people were injured non-fatally and 223 more were reported missing. There were instances of rape, children being burned alive, and widespread looting and destruction of property. Modi has been accused of initiating and condoning the violence, as have police and government officials who allegedly directed the rioters and gave them lists of Muslim-owned properties. In 2012, Modi was cleared of complicity in the violence by a Special Investigation Team (SIT) appointed by the Supreme Court of India. The SIT also rejected claims that the state government had not done enough to prevent the riots. The Muslim community reacted with anger and disbelief. In 2013, allegations were made that the SIT had suppressed evidence, but the Supreme Court expressed satisfaction over the SIT’s investigations. While officially classified as a communalist riot, the events have been described as a pogrom by many scholars. 
Other observers have stated that these events met the legal definition of genocide and called it an instance of state terrorism or ethnic cleansing. Modi led the BJP in the 2014 general election, which gave the party a majority in the parliament, the first time a single party had achieved this since 1984. Credited with engineering a political realignment towards right-wing politics, Modi remains a figure of controversy, domestically and internationally, over his Hindu nationalist beliefs and his role during the 2002 Gujarat riots, cited as evidence of an exclusionary social agenda. Modi's Hindu nationalist stance threatens to further harm India's relations with neighboring Pakistan, with its Muslim majority population, especially considering the wars between India and Pakistan since the division of the Indian subcontinent at the end of British occupation. The possession of nuclear weapons by both India and Pakistan since the 1990s increases the dangers that any future war between these two nations would entail. The economic policies of Modi’s government focused on privatization and liberalization of the economy based on a neoliberal framework. Modi updated India’s foreign direct investment policies to allow more foreign investment in several industries, including defense and the railways. Other reforms included removing many of the country’s labor laws to make it harder for workers to form unions and easier for employers to hire and fire them. These reforms met with support from institutions such as the World Bank, but opposition from scholars within the country. The labor laws also drew strong opposition from unions. The funds dedicated to poverty reduction programs and social welfare measures were greatly decreased by the Modi administration. The government also lowered corporate taxes, abolished the wealth tax, reduced customs duties on gold and jewelry, and increased sales taxes. 
In 2014, Modi introduced the Make in India initiative to encourage foreign companies to manufacture products in India, with the goal of turning the country into a global manufacturing hub. Supporters of economic liberalization supported the initiative, while critics argued it would allow foreign corporations to capture a greater share of the Indian market. To enable the construction of private industrial corridors, the Modi administration passed a land-reform bill that allowed it to acquire private agricultural land, without conducting social impact assessment and without the consent of the farmers who owned it. The bill was passed via an executive order after it faced opposition in parliament but was eventually allowed to lapse. In 2015, Modi launched a program intended to develop 100 smart cities, which is expected to bring information technology companies an extra benefit of ₹20 billion ($300 million US). Modi also launched the Housing for All By 2022 project, which intends to eliminate slums in India by building about 20 million affordable homes for India’s urban poor. Modi’s government reduced the amount of money spent by the government on healthcare and launched a New Health Policy, which emphasizes the role of private healthcare. This represented a shift away from the policy of the previous Congress-led government, which had supported programs to assist public health goals, including reducing child and maternal mortality rates. Modi also launched the Clean India campaign (2014) to eliminate open defecation and manual scavenging. As part of the program, the Indian government began constructing millions of toilets in rural areas and encouraging people to use them. The government also announced plans to build new sewage treatment plants. Modi’s reformist approach has made him very popular with the public. 
At the end of his first year in office, he received an overall approval rating of 87% in a Pew Research poll, with 68% of people rating him “very favorably” and 93% approving of his government. At the end of his second year in office, an updated Pew Research poll showed Modi continued to receive high overall approval ratings of 81%, with 57% of those polled rating him “very favorably.” In naming his cabinet, Modi renamed the Ministry of Environment and Forests the Ministry of Environment, Forests, and Climate Change. In the first budget of the government, the money allotted to this ministry was reduced by more than 50%. The new ministry also removed or diluted a number of laws related to environmental protection. These included no longer requiring clearance from the National Board for Wildlife for projects close to protected areas and allowing certain projects to proceed before environmental clearance was received. Modi also relaxed or abolished a number of other environmental regulations, particularly those related to industrial activity. A government committee stated that the existing system only created corruption and that the government should instead rely on the owners of industries to voluntarily inform the government about the pollution they were creating. In addition, Modi lifted a moratorium on new industrial activity in the most polluted areas. The changes were welcomed by businesspeople but criticized by environmentalists. Attributions Title Image Indian Prime Minister, Shri Narendra Modi, 2015. Prime Minister's Office, Government of India, GODL-India <https://data.gov.in/sites/default/files/Gazette_Notification_OGDL.pdf>, via Wikimedia Commons Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/east-asia-in-the-21st-century/
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/88116/overview", "title": "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 17: Post-Cold War International Structure, Contemporary China and India", "author": "Anna McCollum" }
https://oercommons.org/courseware/lesson/88106/overview
The Global War on Terror Overview War on Terror On September 11, 2001 ("9/11"), the United States was struck by a terrorist attack when 19 al-Qaeda hijackers commandeered four airliners to be used in suicide attacks. They intentionally crashed two of the planes into the twin towers of the World Trade Center and a third into the Pentagon, killing 2,937 victims: 206 aboard the three airliners, 2,606 in the World Trade Center and on the ground, and 125 in the Pentagon. Passengers and crew fought to retake the fourth plane; although they could not land it safely, they forced it down in an empty field in Pennsylvania. The crash killed all 44 people on board, including the four terrorists, but this heroic action saved whatever target those terrorists were aiming for. All in all, a total of 2,977 people perished in the attacks. Learning Objectives Analyze the international structure that emerged in the post-Cold War era. Key Terms / Key Concepts al-Qaeda: a militant Sunni Islamist multi-national organization founded in 1988 by Osama bin Laden, Abdullah Azzam, and several other Arab volunteers who fought against the Soviet invasion of Afghanistan in the 1980s; widely designated as a terrorist group Taliban: a Sunni Islamic fundamentalist political movement in Afghanistan currently waging war (an insurgency, or jihad) within that country; a group that uses terrorism as a specific tactic to further their ideological and political goals “War on Terror”: designation for the U.S. government’s operations against al-Qaeda and the Taliban, among other terrorist organizations, in the wake of the 11 September 2001 al-Qaeda terrorist attacks In response to what is now referred to as 9/11, President George W. Bush on September 20 announced a “War on Terror,” focusing on al-Qaeda and the Taliban, along with the groups and countries that assisted them, which included Afghanistan and Iraq. 
On October 7, 2001, the United States and NATO invaded Afghanistan to oust the Taliban regime, which had provided safe haven to al-Qaeda and its leader Osama bin Laden. The wars in Afghanistan and Iraq failed to stabilize the political situation in the Middle East and contributed to ongoing civil conflicts, with counterterrorism experts arguing that they created circumstances beneficial to the escalation of radical Islamism. The U.S. government also took steps at home to prevent future attacks. The controversial USA PATRIOT Act increased the government's power to monitor communications and removed legal restrictions on information sharing between federal law enforcement and intelligence services. A cabinet-level agency called the Department of Homeland Security was created to lead and coordinate federal counter-terrorism activities. Some of these anti-terrorism efforts, particularly the U.S. government's handling of detainees at the prison at Guantanamo Bay, led to allegations against the U.S. government of human rights violations. Iraq War Although explicitly stating that Iraq had "nothing" to do with 9/11, President George W. Bush consistently referred to the Iraq war as “the central front in the war on terror” and argued that if the United States pulled out of Iraq, “terrorists will follow us here.” The reasons for the invasion cited by the Bush administration included the spreading of democracy, the elimination of weapons of mass destruction, and the liberation of the Iraqi people. The Bush administration based its rationale for the war principally on the assertion that Iraq possessed weapons of mass destruction (WMDs) and that the Iraqi government posed an immediate threat to the United States and its coalition allies. Select U.S. officials accused Saddam Hussein, then the leader of Iraq, of harboring and supporting al-Qaeda, while others cited the desire to end his repressive dictatorship and bring democracy to the people of Iraq. 
Learning Objectives Analyze the international structure that emerged in the post-Cold War era. Key Terms / Key Concepts Iraq War: 2003-11 conflict between the U.S. and Iraq, at the center of the U.S. "War on Terror". The U.S. invasion of Iraq began this war, and it was subsequently marked by the overthrow of Saddam Hussein, replaced by a Shia-led parliamentary republic, and an insurgency against this new government, along with the continuing U.S. military presence in Iraq, which ended in 2011. The Bush Administration began the Iraq War in March 2003, with the United States, joined by the United Kingdom and several coalition allies, invading Iraq; the invasion opened with the so-called “shock and awe” bombing campaign. Iraqi forces were quickly overwhelmed as U.S. forces swept through the country. The invasion led to the collapse of the Ba'athist government (under the rule of the Arab Socialist Ba'ath Party). President Saddam Hussein was captured during Operation Red Dawn in December 2003, sentenced to death by an Iraqi tribunal, and executed three years later. After the invasion, no substantial evidence was found to verify the initial claims about WMDs, and the rationale for the war and the misrepresentation of pre-war intelligence later faced heavy criticism within the U.S. and internationally. In the aftermath of the invasion, Iraq held multi-party elections in 2005. Nouri al-Maliki became Prime Minister in 2006 and remained in office until 2014. The al-Maliki government enacted policies that were widely seen as having the effect of alienating the country's Sunni minority and worsening sectarian tensions. These policies and the fact that Iraq was a “nation” cobbled together by WWI allies out of already warring factions, combined with the power vacuum following Saddam's fall and the mismanagement of the occupation, led to widespread sectarian violence between Shias and Sunnis, as well as a lengthy insurgency against U.S. and coalition forces. The United States responded with a troop surge in 2007. 
Despite some early successes in the invasion, the continued Iraq War fueled international protests and a gradual decline in U.S. domestic support, as many people began to question whether the invasion was worth the cost. While proponents of the war outside of the Bush Administration regularly echoed the assertion that this would reduce the chances of terrorists coming to the U.S., as the conflict dragged on, members of the U.S. Congress, the U.S. public, and even U.S. troops questioned the connection between Iraq and the fight against anti-U.S. terrorism. In particular, a consensus developed among intelligence experts that the Iraq war actually increased terrorism. Counterterrorism expert Rohan Gunaratna frequently referred to the invasion of Iraq as a “fatal mistake.” London's International Institute for Strategic Studies concluded in 2004 that the occupation of Iraq had become “a potent global recruitment pretext” for radical Muslim fighters and that the invasion “galvanized” al-Qaeda and “perversely inspired insurgent violence.” The U.S. National Intelligence Council concluded in a 2005 report that the war in Iraq had become a breeding ground for a new generation of terrorists. David Low, the national intelligence officer for transnational threats, indicated that the report concluded that the war in Iraq provided terrorists with “a training ground, a recruitment ground, the opportunity for enhancing technical skills... There is even, under the best scenario, over time, the likelihood that some of the jihadists who are not killed there will, in a sense, go home, wherever home is, and will therefore disperse to various other countries.” The Council's chairman Robert Hutchings noted, “At the moment, Iraq is a magnet for international terrorist activity.” The 2006 National Intelligence Estimate, which outlined the considered judgment of all 16 U.S. 
intelligence agencies, concluded that “the Iraq conflict has become the 'cause célèbre' for jihadists, breeding a deep resentment of U.S. involvement in the Muslim world and cultivating supporters for the global jihadist movement.” In 2008, the unpopularity of President Bush and the Iraq War, along with the 2008 financial crisis, contributed to the election of Barack Obama, who questioned the Iraq War. After his election, Obama reluctantly continued the war effort in Iraq until August 31, 2010, when he declared that combat operations had ended. However, 50,000 American soldiers and military personnel were kept in Iraq to assist Iraqi forces, help protect withdrawing forces, and work on counter-terrorism until December 15, 2011, when the war was declared formally over and the last troops left the country. Aftermath of 2011 Withdrawal from Iraq The invasion and occupation of Iraq led to sectarian violence, which caused widespread displacement among Iraqi civilians. The Iraqi Red Crescent organization estimated the total internal displacement was around 2.3 million in 2008, and as many as 2 million Iraqis left the country. The invasion preserved the autonomy of the Kurdish region, and because that region is historically the most democratic area of Iraq, many Iraqi refugees from other territories fled there. The Iraqi insurgency surged in the aftermath of the U.S. withdrawal. Terror campaigns involving Iraqi (primarily radical Sunni) anti-government rebel groups and various factions within Iraq escalated. Events following the U.S. withdrawal raised concerns that the surging violence might slide into another civil war. By mid-2014, the country was in chaos with a new government yet to be formed following national elections and the insurgency reaching new heights. 
In early June 2014, ISIL (ISIS) took over the cities of Mosul and Tikrit and stated it was ready to march on Baghdad, while Iraqi Kurdish forces took control of key military installations in the major oil city of Kirkuk. Prime Minister Nouri al-Maliki asked his parliament to declare a state of emergency that would give him increased powers, but the lawmakers refused. In the summer of 2014 President Obama announced the return of U.S. forces to Iraq, but only in the form of aerial support, in an effort to halt the advance of ISIS forces, render humanitarian aid to stranded refugees, and stabilize the political situation. In August 2014, Prime Minister Nouri al-Maliki succumbed to pressure at home and abroad to step down. This paved the way for Haidar al-Abadi to take over. In what was claimed to be revenge for the aerial bombing ordered by President Obama, ISIS, which by this time had changed its name to the Islamic State, beheaded an American journalist, James Foley, who had been kidnapped two years earlier. Despite U.S. bombings and breakthroughs on the political front, Iraq remained in chaos with the Islamic State consolidating its gains and sectarian violence continuing unabated. Consequences of the Iraq War Various scientific surveys of Iraqi deaths resulting from the first four years of the Iraq War estimated that between 151,000 and over one million Iraqis died as a result of the conflict during this time. A later study, published in 2011, estimated that approximately 500,000 Iraqis had died as a result of the conflict since the invasion. For troops in the U.S.-led multinational coalition, the death toll is carefully tracked and updated daily. A total of 4,491 U.S. service members were killed in Iraq between 2003 and 2014. Regarding the Iraqis, however, information on both military and civilian casualties is both less precise and less consistent. The Iraq War caused hundreds of thousands of civilian and thousands of military casualties. 
The majority of casualties occurred as a result of the insurgency and civil conflicts between 2004 and 2007. The war destroyed the country and resulted in a humanitarian crisis. The child malnutrition rate rose to 28%. Some 60 – 70% of Iraqi children were reported to be suffering from psychological problems in 2007. Most Iraqis had no access to safe drinking water; a cholera outbreak in northern Iraq was thought to be the result of poor water quality. As many as half of Iraqi doctors left the country between 2003 and 2006. Poverty led many Iraqi women to turn to prostitution to support themselves and their families, attracting sex tourists from neighboring countries. The use of depleted uranium and white phosphorus by the U.S. military has been blamed for birth defects and cancers in the Iraqi city of Fallujah. By the end of 2015, according to the Office of the United Nations High Commissioner for Refugees, 4.4 million Iraqis had been internally displaced. The population of Iraqi Christians dropped dramatically during the war, from 1.5 million in 2003 to perhaps only 275,000 in 2016. The Foreign Policy Association reported that “the most perplexing component of the Iraq refugee crisis” was that the U.S. has accepted only around 84,000 Iraqi refugees. Throughout the Iraq War, there have been human rights abuses on all sides of the conflict. Arguably the most controversial incident was a series of human rights violations against detainees in the Abu Ghraib prison in Iraq. These violations perpetrated by American soldiers included physical and sexual abuse, torture, rape, sodomy, and murder. The abuses came to widespread public attention with the publication of photographs of the abuse by CBS News in April 2004. The incidents received widespread condemnation both within the United States and abroad, although the soldiers received support from some conservative media within the United States. The administration of George W. 
Bush attempted to portray the abuses as isolated incidents, not indicative of general U.S. policy. This was contradicted by humanitarian organizations such as the Red Cross, Amnesty International, and Human Rights Watch. After multiple investigations, these organizations stated that the abuses at Abu Ghraib were not isolated incidents; rather, they were part of a wider pattern of torture and brutal treatment at American overseas detention centers, including those in Iraq, Afghanistan, and Guantanamo Bay. Several scholars stated that the abuses constituted state-sanctioned crimes. Afghanistan War The United States invasion of Afghanistan occurred after the September 11 attacks in late 2001, overlapping the 2003 U.S. invasion of Iraq. President Bush demanded that the Taliban hand over Osama bin Laden and expel al-Qaeda from Afghanistan. The Taliban government refused to extradite him (or others sought by the U.S.) without evidence of his involvement in the 9/11 attacks. The request was dismissed by the U.S. as a meaningless delaying tactic, and on October 7, 2001, it launched Operation Enduring Freedom with the United Kingdom. The two were later joined by other forces, including the Afghan Northern Alliance that had been fighting the Taliban in the ongoing civil war since 1996. In December 2001, the United Nations Security Council established the International Security Assistance Force (ISAF) to assist the Afghan interim authorities with securing Kabul. At the Bonn Conference the same month, Hamid Karzai was selected to head the Afghan interim administration, which after a 2002 loya jirga (Pashto for “grand assembly”) in Kabul became the Afghan transitional administration. In the popular elections of 2004, Karzai was elected president of the country, then named the Islamic Republic of Afghanistan. NATO became involved in ISAF in 2003 and later that year assumed command of the force, whose troops came from 43 countries. NATO members provided the core of the force. One portion of U.S. 
forces in Afghanistan operated under NATO command. The rest remained under direct U.S. command. Learning Objectives Analyze the international structure that emerged in the post-Cold War era. Key Terms / Key Concepts al-Qaeda: a militant Sunni Islamist multi-national organization founded in 1988 by Osama bin Laden, Abdullah Azzam, and several other Arab volunteers who fought against the Soviet invasion of Afghanistan in the 1980s; widely designated as a terrorist group Taliban: a Sunni Islamic fundamentalist political movement in Afghanistan currently waging war (an insurgency, or jihad) within that country; a group that uses terrorism as a specific tactic to further their ideological and political goals Iraq War: 2003-11 conflict between the U.S. and Iraq, at the center of the U.S. "War on Terror". The U.S. invasion of Iraq began this war, and it was subsequently marked by the overthrow of Saddam Hussein, replaced by a Shia-led parliamentary republic, and an insurgency against this new government, along with the continuing U.S. military presence in Iraq, which ended in 2011. The Taliban was reorganized by its leader Mullah Omar, and in 2003 it launched an insurgency against the government and ISAF. Although outgunned and outnumbered, insurgents from the Taliban and other radical groups have waged asymmetric warfare with guerrilla raids and ambushes in the countryside, suicide attacks against urban targets, and turncoat killings against coalition forces. The Taliban exploited weaknesses in the Afghan government, among the most corrupt in the world, to reassert influence across rural areas of southern and eastern Afghanistan. In the initial years there was little fighting, but from 2006 the Taliban made significant gains and showed an increased willingness to commit atrocities against civilians. Violence sharply escalated from 2007 to 2009. While ISAF continued to battle the Taliban insurgency, fighting crossed into neighboring northwestern Pakistan. 
The Narang night raid was a raid on a household in the village of Ghazi Khan in the early morning hours of December 27, 2009. The operation was authorized by NATO and resulted in the death of ten Afghan civilians, most of whom were students and some of whom were children. The status of the deceased was initially in dispute, with NATO officials claiming the dead were Taliban members found with weapons and bomb-making materials, while some Afghan government officials and local tribal authorities asserted they were civilians. On May 2, 2011, United States Navy SEALs killed Osama bin Laden in Abbottabad, Pakistan. A year later, NATO leaders endorsed an exit strategy for withdrawing their forces. UN-backed peace talks have since taken place between the Afghan government and the Taliban. In May 2014, the United States announced that its major combat operations would end in December and that it would leave a residual force in the country. In October 2014, British forces handed over the last bases in Helmand to the Afghan military, officially ending their combat operations in the war. In December 2014, NATO formally ended combat operations in Afghanistan and transferred full security responsibility to the Afghan government. Aftermath and Consequences of the U.S. Invasion of Afghanistan Although there was a formal end to combat operations, partially because of improved relations between the United States and the new President Ashraf Ghani, American forces increased raids against Islamic militants and terrorists, justified by a broad interpretation of protecting American forces. In March 2015, it was announced that the United States would maintain almost ten thousand service members in Afghanistan until at least the end of 2015, a change from planned reductions. In October 2015, the Obama administration announced that U.S. troops would remain in Afghanistan past the original planned withdrawal date of December 31, 2016. U.S. 
forces continued to conduct airstrikes and special operations raids, while Afghan forces were losing ground to Taliban forces in some regions. This continuing U.S. presence in Afghanistan was unpopular with people in both the U.S. and Afghanistan. Consequently, in 2020 – 21 the U.S. carried out a withdrawal of its forces from Afghanistan. War casualty estimates vary significantly. According to a UN report, the Taliban were responsible for 76% of civilian casualties in Afghanistan in 2009. In 2011, a record of over three thousand civilians were killed, the fifth successive annual rise. According to a UN report, in 2013 there were nearly three thousand civilian deaths, with 74% blamed on anti-government forces. A report titled Body Count put together by Physicians for Social Responsibility, Physicians for Global Survival, and the Nobel Peace Prize-winning International Physicians for the Prevention of Nuclear War (IPPNW) concluded that 106,000 – 170,000 civilians have been killed as a result of the fighting in Afghanistan at the hands of all parties to the conflict. According to the Watson Institute for International Studies Costs of War Project, 21,000 civilians have been killed as a result of the war. An estimated 96% of Afghans have been affected either personally or by the wider consequences of the war. Since 2001, more than 5.7 million former refugees have returned to Afghanistan, but 2.2 million others remained refugees in 2013. In 2013, the UN estimated that 547,550 were internally displaced persons, a 25% increase over the 2012 estimates. From 1996 to 1999, the Taliban had controlled 96% of Afghanistan's poppy fields and made opium its largest source of revenue. Taxes on opium exports became one of the mainstays of Taliban income. By 2000, Afghanistan accounted for an estimated 75% of the world's opium supply. The Taliban leader Mullah Omar then banned opium cultivation and production dropped. 
Some observers argue that the ban was issued only to raise opium prices and increase profit from the sale of large existing stockpiles. The trafficking of accumulated stocks continued in 2000 and 2001. Soon after the invasion, opium production increased markedly. By 2005, Afghanistan was producing 90% of the world's opium, most of which was processed into heroin and sold in Europe and Russia. In 2009, the BBC reported that “UN findings say an opium market worth $65bn funds global terrorism, caters to 15 million addicts, and kills 100,000 people every year.” War crimes have been committed by both sides: civilian massacres, bombings of civilian targets, terrorism, use of torture, and the murder of prisoners of war. Additional common crimes include theft, arson, and the destruction of property not warranted by military necessity. The Afghanistan Independent Human Rights Commission (AIHRC) called the Taliban's terrorism against the Afghan civilian population a war crime. According to Amnesty International, the Taliban commit war crimes by targeting civilians, including killing teachers, abducting aid workers, and burning school buildings. The organization reported that up to 756 civilians were killed in 2006 by bombs, mostly on roads or carried by suicide attackers belonging to the Taliban. NATO has also alleged that the Taliban has used civilians as human shields. In 2009, the U.S. confirmed that Western military forces in Afghanistan used white phosphorus as a weapon to illuminate targets or as an incendiary to destroy bunkers and enemy equipment; this has been condemned by human rights organizations as cruel and inhumane because it causes severe burns. U.S. forces used white phosphorus to screen a retreat in the Battle of Ganjgal when regular smoke munitions were not available. White phosphorus burns on the bodies of civilians wounded in clashes near Bagram were confirmed. The U.S. 
claims at least 44 instances in which militants have used white phosphorus in weapons or attacks. The 2001 – 21 Afghanistan War and the 2003 – 11 Iraq War were the two major efforts in the U.S.’s “War on Terrorism.” These two wars illustrated the challenges in trying to stop terrorism by radicalized religious groups with conventional military efforts conducted by coalitions of nation-states. These wars also illustrated the vulnerability of all societies to radicalization from terrorist groups, such as al-Qaeda, the Taliban, and the Proud Boys. Attributions Images Courtesy of Wikipedia Commons Title Image - War on Terror Montage. Attribution: Derivative work: PoxnarAll four pictures in the montage are taken by the US Army/Navy., Public domain, via Wikimedia Commons. Provided by: Wikipedia Commons. Location: https://commons.wikimedia.org/wiki/File:War_on_Terror_montage1.png. License: Creative Commons CC0 License. Boundless World History "The Middle East and North Africa in the 21st Century" CC LICENSED CONTENT, SHARED PREVIOUSLY Curation and Revision. Provided by: Boundless.com. License: CC BY-SA: Attribution-ShareAlike CC LICENSED CONTENT, SPECIFIC ATTRIBUTION History of the United States (1991–2008). Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_United_States_(1991-2008). License: CC BY-SA: Attribution-ShareAlike WTC_smoking_on_9-11.jpeg. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_the_United_States#/media/File:WTC_smoking_on_9-11.jpeg. License: CC BY-SA: Attribution-ShareAlike Freedom of the Press (report). Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Freedom_of_the_Press_(report). License: CC BY-SA: Attribution-ShareAlike Democracy in the Middle East. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Democracy_in_the_Middle_East. License: CC BY-SA: Attribution-ShareAlike Freedom in the World. Provided by: Wikipedia. 
Located at: https://en.wikipedia.org/wiki/Freedom_in_the_World. License: CC BY-SA: Attribution-ShareAlike Human rights in the Middle East. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Human_rights_in_the_Middle_East. License: CC BY-SA: Attribution-ShareAlike Women in Arab societies. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Women_in_Arab_societies. License: CC BY-SA: Attribution-ShareAlike Politics of Iraq. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Politics_of_Iraq. License: CC BY-SA: Attribution-ShareAlike 800px-Iraqi_voters_inked_fingers.jpg. Provided by: Wikipedia. Located at: https://commons.wikimedia.org/wiki/File:Iraqi_voters_inked_fingers.jpg. License: CC BY-SA: Attribution-ShareAlike Hadith. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Hadith. License: CC BY-SA: Attribution-ShareAlike Islam and modernity. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islam_and_modernity. License: CC BY-SA: Attribution-ShareAlike Salafi movement. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Salafi_movement. License: CC BY-SA: Attribution-ShareAlike Muslim Brotherhood. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Muslim_Brotherhood. License: CC BY-SA: Attribution-ShareAlike Al-Qaeda. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Al-Qaeda. License: CC BY-SA: Attribution-ShareAlike Hezbollah. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Hezbollah. License: CC BY-SA: Attribution-ShareAlike Islamic fundamentalism. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islamic_fundamentalism. License: CC BY-SA: Attribution-ShareAlike Jihad. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Jihad. License: CC BY-SA: Attribution-ShareAlike Sharia. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Sharia. 
License: CC BY-SA: Attribution-ShareAlike Taliban. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Taliban. License: CC BY-SA: Attribution-ShareAlike Islamic revival. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islamic_revival. License: CC BY-SA: Attribution-ShareAlike Islamic State of Iraq and the Levant. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islamic_State_of_Iraq_and_the_Levant. License: CC BY-SA: Attribution-ShareAlike Hamas. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Hamas. License: CC BY-SA: Attribution-ShareAlike Islamism. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islamism. License: CC BY-SA: Attribution-ShareAlike War in Afghanistan (2015–present). Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/War_in_Afghanistan_(2015%E2%80%93present). License: CC BY-SA: Attribution-ShareAlike Ba'athist Iraq. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Ba'athist_Iraq. License: CC BY-SA: Attribution-ShareAlike Iraq War. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Iraq_War. License: CC BY-SA: Attribution-ShareAlike Casualties of the Iraq War. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War. License: CC BY-SA: Attribution-ShareAlike War in Afghanistan (2001–2014). Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/War_in_Afghanistan_(2001%E2%80%932014). License: CC BY-SA: Attribution-ShareAlike Operation Enduring Freedom. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Operation_Enduring_Freedom. License: CC BY-SA: Attribution-ShareAlike Northern Alliance. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Northern_Alliance. 
License: CC BY-SA: Attribution-ShareAlike 2002 loya jirga. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/2002_loya_jirga. License: CC BY-SA: Attribution-ShareAlike War on Terror. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/War_on_Terror. License: CC BY-SA: Attribution-ShareAlike Narang night raid. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Narang_night_raid. License: CC BY-SA: Attribution-ShareAlike Arab Spring. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Arab_Spring. License: CC BY-SA: Attribution-ShareAlike Impact of the Arab Spring. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Impact_of_the_Arab_Spring. License: CC BY-SA: Attribution-ShareAlike Joint Plan of Action. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Joint_Plan_of_Action. License: CC BY-SA: Attribution-ShareAlike Iran nuclear deal framework. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Iran_nuclear_deal_framework. License: CC BY-SA: Attribution-ShareAlike Nuclear program of Iran. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Nuclear_program_of_Iran. License: CC BY-SA: Attribution-ShareAlike Islamic Revolutionary Guard Corps. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Islamic_Revolutionary_Guard_Corps. License: CC BY-SA: Attribution-ShareAlike Iran–Israel relations. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Iran%E2%80%93Israel_relations. License: CC BY-SA: Attribution-ShareAlike P5+1. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/P5%2B1. License: CC BY-SA: Attribution-ShareAlike Iran–United States relations. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Iran%E2%80%93United_States_relations. License: CC BY-SA: Attribution-ShareAlike Sanctions against Iran. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Sanctions_against_Iran. 
License: CC BY-SA: Attribution-ShareAlike Negotiations leading to the Joint Comprehensive Plan of Action. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Negotiations_leading_to_the_Joint_Comprehensive_Plan_of_Action. License: CC BY-SA: Attribution-ShareAlike Joint Comprehensive Plan of Action. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Joint_Comprehensive_Plan_of_Action. License: CC BY-SA: Attribution-ShareAlike History of Iran. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/History_of_Iran. License: CC BY-SA: Attribution-ShareAlike Treaty on the Non-Proliferation of Nuclear Weapons. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons. License: CC BY-SA: Attribution-ShareAlike 800px-Iraqi_voters_inked_fingers.jpg. Provided by: Wikipedia. Located at: https://commons.wikimedia.org/wiki/File:Iraqi_voters_inked_fingers.jpg. License: CC BY-SA: Attribution-ShareAlike Saudi_soldiers_Mecca_1979.JPG. Provided by: Wikipedia. Located at: https://en.wikipedia.org/wiki/Grand_Mosque_seizure#/media/File:Saudi_soldiers,_Mecca,_1979.JPG. License: CC BY-SA: Attribution-ShareAlike
{ "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "url": "https://oercommons.org/courseware/lesson/88106/overview", "title": "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 17: Post-Cold War International Structure, The Global War on Terror", "author": "Anna McCollum" }
https://oercommons.org/courseware/lesson/87943/overview
Colonization of Africa Overview The "Scramble" for Africa The Industrial Revolution resulted in Western Europe’s shift from agrarian societies into urban, industrialized countries. Increasingly, England, France, Germany, and a host of other Western European nations needed natural resources to continue fueling their industrialization. Coal, mineral, and wood resources within their own boundaries were becoming scarcer. Across the Mediterranean Sea, though, rested a continent that had seemingly inexhaustible natural resources: Africa. In the late 1800s, Western European nations launched “civilizing missions” to Africa to explore its resources. Rivalry exploded between European nations, as each hurried to colonize large swaths of Africa. Exploration of the continent soon turned to exploitation and violence. Tragically, these events occurred at the moment when African nations were starting to modernize. European colonization employed such tremendous violence that African infrastructure was crushed and hopes of modernization dashed. Learning Objectives Examine the impact of colonization on Africans. Analyze European motivations for colonization. Compare and contrast the ways in which different European nations carried out colonization. 
Key Terms / Key Concepts Belgian Congo: name of the colony after Congo’s administration was taken over by the Belgian parliament Berlin Conference: 1884 conference between major European powers that divided Africa into colonies Boer: farmers with Dutch ancestry who lived in South Africa Congo Free State: colony created in the late 1800s by King Leopold II of Belgium to harvest rubber and gum trees First Boer War: conflict between the British and Boers that ended in a Boer victory Force Publique: a military and police force organized and operated in the Congo, at the behest of King Leopold II, and later the Belgian government French Algeria: France’s most important African colony German Southwest Africa: the German colony where the genocide of the Herero and Nama peoples occurred Gold Coast: British colony in present-day Ghana Herero and Nama: two indigenous groups in German Southwest Africa who were nearly exterminated by German policies in the early 1900s Kaiser Wilhelm II: Emperor of Germany who wanted to expand Germany’s influence on the global stage King Leopold II: Belgian king known for his atrocious exploitation of the Congolese people under the Congo Free State Maghreb: northwest Africa Maxim-gun: first reliable machine gun Nigeria: the Federal Republic of Nigeria is a West African country that was once a British colony famous for its palm oil production; independence from Britain was declared in 1960 Palm oil: an essential commodity in Europe to produce soaps and machinery lubricants Scramble for Africa: the rush by European countries to colonize Africa in the late 1800s Second Boer War: conflict between the British and Boers that ended in a British victory Social Darwinism: the pseudo-science theory that individuals, groups, and peoples are subject to the same Darwinian laws of natural selection as plants and animals, which equates to “only the strong survive” Sphere of influence: an area in which one country has power to affect the development of other areas Suez 
Canal: waterway in Egypt that connects the Mediterranean and Red Seas Transvaal: Boer-settled region that was a province of South Africa between 1910 – 1994 Weltpolitik: policy developed by Kaiser Wilhelm II that argued Germany should be involved in world politics Africa on the Eve of Modernization: 1860s – 1870s One of the greatest tragedies of the “Scramble for Africa,” which occurred from the 1880s to the early 1900s, is that just prior to the European mad grab, African nations across the continent were on the eve of modernization. Large-scale wars had mostly ceased. The Atlantic Slave Trade had ended, and by extension, slavery itself was virtually extinguished. Life expectancy was extended, a result of improved diet and reduction in disease. Simultaneously, many countries experienced significant population growth. In the 1860s and 1870s, many African nations seemed to be on the verge of transforming their societies into industrialized, developed countries. Economically, African nations prospered from the development of strong trade routes across the continent. With relative peace at hand, traders from Angola, Kenya, Tanzania, and Mozambique began exploring and trading across East Africa. Still, other voyagers braved traveling and trading across the Saharan Trade Route. And many African traders made extensive use of one of the continent’s greatest resources: its rivers. The Nile, White Nile, and Congo Rivers all became superhighways for trade and exploration. Relatively friendly relations between most African nations emerged from the advancements in trade and exploration. Goods such as ivory, grains, wines, and precious stones were exchanged. And from this exchange arose new social structures—ones that included an African middle class comprising traders and merchants. Like all exchanges, the development of trade and exchange across Africa also helped the dissemination of languages, cultural customs, and beliefs. During this period, African kingdoms started to dissolve, too. 
In their place emerged nations that were increasingly centralized. Among these were Ethiopia, Egypt, and Madagascar. In these new, centralized states, there was also a dramatic increase in the emphasis on democratic ideas, as well as the push for improved and equal education. Ghana, Nigeria, and Liberia all enacted legislation that called for the election of government officials. In Ghana, a constitution was written that included the right of education for all children, as well as the development of resources to promote unity among its people. Increasingly, schools were built so that even poorer children could receive some education. In much of the rest of Africa, an intellectual revolution occurred. It introduced the “educated African elite.” Tragically, what most Africans lacked was the benefit of an industrial revolution. Technologically, African nations lagged far behind their European counterparts, which meant that commercially they did not have the machines to produce goods in a competitive manner. Largely, they remained unaware of the actual scope of technological development in Europe, including advancements in weaponry and medicines that could fight diseases. When the Europeans set their minds to colonization, in most cases the Africans could not long resist them because of this lag in technology, industrialization, and medicine. Involvement in Africa before 1884 Early European expeditions concentrated on colonizing previously uninhabited islands—such as the Cape Verde Islands and São Tomé Island—or establishing coastal forts. These forts often developed areas of influence along coastal strips. But they did not venture into the mainland, and the vast interior of Africa was little-known to Europeans until the late 19th century. Technological advancements—such as railways, telegraphs, and steam navigation—facilitated European expansion overseas. Medical advances also were important, especially medicines for tropical diseases. 
The development of quinine, an effective treatment for malaria, enabled vast expanses of the tropics to be accessed by Europeans, because they no longer faced certain severe illness or death from insect-borne diseases. African Colonization in the 19th Century By the mid-19th century, Europeans considered Africa to be a disputed territory ripe for colonization. On a practical level, Europeans needed to colonize Africa for its wealth of natural resources—essential in keeping industries thriving. Psychologically, middle-class Western Europeans also believed in Social Darwinism—the belief that Darwin’s theory of natural selection could be applied to people, which equated to an acceptance that “only the strong survive.” It was a trendy, horribly inaccurate, and unscientific way of explaining why some humans prospered and others did not (one that some fallaciously adhere to even today). Western Europeans increasingly used this theory, started by Herbert Spencer, to argue that they were wealthier than people in Africa and Asia because they were inherently smarter and more industrious, as well as because they were white. By the end of the 1800s, this pseudo-social science, despite its inherent racism, increased in popularity among European heads of state, and they used it as justification for their imperialist practices. In 1876, King Leopold II of Belgium invited the British-American explorer Henry Morton Stanley to join him in researching and “civilizing” Africa. At the time of the invitation, Stanley was already internationally renowned for his explorations in Zanzibar, and for his “discovery” of the English explorer David Livingstone, who had searched for the source of the Nile River, then allegedly vanished. In 1871, Stanley encountered the “missing” explorer near Lake Tanganyika. Famously, he greeted Livingstone by asking, “Dr. Livingstone, I presume?” Overnight, Stanley’s fame exploded, and he became an international hero. 
In 1876, Stanley accepted King Leopold’s invitation. Two years later, he embarked on an extended voyage to the Congo (1878 – 1885). On this mission, Stanley traveled not as a reporter but as an envoy from Leopold with the secret mission to create what would become known as the Congo Free State. French intelligence discovered Leopold’s plans, and France quickly engaged in its own colonial exploration. Portugal also claimed the area. Italy, Britain, Spain, and Germany all soon became involved in the carving up of Africa. Berlin Conference This rapid increase in the exploration and colonization of Africa eventually led to the 1884 Berlin Conference. Established empires—notably Britain, Portugal, and France—had already claimed vast areas of Africa and Asia, and emerging imperial powers like Italy and Germany had done likewise on a smaller scale. With the dismissal of the aging Chancellor Bismarck by Kaiser Wilhelm II, the relatively orderly colonization became a frantic scramble, known as the Scramble for Africa. The Berlin Conference, initiated to establish international guidelines for the acquisition of African territory, formalized this “New Imperialism.” The Berlin Conference sought to end competition and conflict between European powers during the “Scramble for Africa” by establishing international protocols for colonization. Tragically, the Africans had no voice in the proceedings. Europeans neither sought their opinions nor invited them to the Conference. The conference was convened on Saturday, November 15, 1884. The main dominating powers of the conference were France, Germany, Great Britain, and Portugal. They remapped Africa without considering the cultural and linguistic borders that were already established. At the end of the conference, Africa was divided into 50 colonies. And the attendees established who was in control of each of these new divisions. 
Between the Franco-Prussian War (1871) and World War I (1914), Western Europe added almost 9 million square miles—one-fifth of the land area of the globe—to its overseas colonial possessions by claiming land in Africa. Consequences of the Conference The Scramble for Africa sped up after the Conference, since even within areas designated as their spheres of influence, the European powers had to take possession. In central Africa in particular, expeditions were dispatched to coerce traditional rulers into signing treaties, using force if necessary. Bedouin- and Berber-ruled states in the Sahara and Sub-Sahara were overrun by the French in several wars by the beginning of World War I. The British conquered territories from Egypt to South Africa. After defeating the Zulu Kingdom in South Africa in 1879, they moved on to subdue and dismantle the independent Boer republics of Transvaal and Orange Free State. By 1902, 90% of all African land was under European control. The larger part of the Sahara was French, while Sudan remained firmly under joint British-Egyptian rule. Egypt itself was under British occupation before becoming a British protectorate in 1914. Heart of Darkness: The Congo Free State King Leopold II’s reign in the Congo became an international scandal due to large-scale mistreatment of the indigenous peoples, including frequent mutilation and murder of men, women, and children to enforce rubber production quotas. Colonization of the Congo Belgian exploration and administration took place from the 1870s until the 1920s. It was first led by Sir Henry Morton Stanley, who explored under the sponsorship of King Leopold II of Belgium. As Europe industrialized, its need for rubber dramatically increased. A seemingly endless grove of rubber trees existed throughout Congo, and Leopold wanted it. Leopold saw the Congo as a source of unlimited wealth, particularly in the form of rubber. 
He procured the region by convincing the European community that he was involved in humanitarian and philanthropic work. Leopold formally acquired rights to the Congo territory at the Conference of Berlin in 1885 and made the land his private property. On May 29, 1885, the king named his new colony the Congo Free State; it could not have been more of a misnomer for the Congolese. Under Leopold, they would be anything but free. Leopold extracted ivory, rubber, and minerals in the upper Congo basin for sale on the world market, without much actual concern for the human inhabitants of the land, even though his alleged purpose in the region was to uplift the local people and develop the area. Administration of the Congo Free State Beginning in the mid-1880s, Leopold first decreed that the state asserted rights of proprietorship over all vacant lands throughout the Congo territory. Leopold used the title “Sovereign King” as ruler of the Congo Free State. He appointed the heads of the three departments of state: interior, foreign affairs, and finances. These positions were, naturally, filled by Belgians who understood little about the Congolese people. As the self-installed ruler, Leopold pledged to suppress the east African slave trade; promote humanitarian policies; guarantee free trade within the colony; impose no import duties for twenty years; and encourage philanthropic and scientific enterprises. In three successive decrees, Leopold limited the rights of the Congolese in their land to native villages and farms, essentially making nearly all of the Congo Free State state-owned land. And the colonial administration initially liberated thousands of slaves. Shortly after the anti-slavery conference he held in Brussels in 1889, Leopold issued a new decree which said that Africans could only sell their harvested products (mostly ivory and rubber) to the government of the Free State. 
Suddenly, the only market Congolese people had for their products was in Belgium, which could set purchase prices and, therefore, control the amount of income the Congolese could receive for their work. Human Rights Abuses The Force Publique, Leopold’s private army, was used to enforce the rubber quotas. The Force Publique’s officer corps included only white Europeans. On arriving in the Congo, the officers recruited soldiers from Zanzibar and west Africa, and eventually from the Congo itself. Many of the black soldiers were from far-off peoples of the Upper Congo, while others had been kidnapped in raids on villages in their childhood and brought to Roman Catholic missions, where they received a military training in conditions close to slavery. Armed with modern weapons and the chicotte—a whip made of hippopotamus hide—the Force Publique routinely took and tortured hostages, slaughtered families of rebels, and flogged and raped Congolese people. They also burned non-submissive villages, and above all, cut off the hands of Congolese natives, including children. In addition, Leopold encouraged the slave trade among Arabs in the Upper Congo in return for slaves to fill the ranks of the Force Publique. During the 1890s, the agency’s primary role was to exploit the natives as laborers to promote the rubber trade, essentially continuing the practice of slavery. Failure to meet the rubber collection quotas was punishable by death. Meanwhile, the Force Publique was required to provide the hands of their victims as proof that they had used their bullets, which were imported from Europe at considerable cost. Sometimes the hands were collected by the soldiers, and sometimes by the villagers themselves. One junior European officer described a raid to punish a village that had protested. 
The European officer in command “ordered us to cut off the heads of the men and hang them on the village palisades… and to hang the women and the children on the palisade in the form of a cross.” After seeing a Congolese person killed for the first time, a Danish missionary wrote, “The soldier said ‘Don’t take this to heart so much. They kill us if we don’t bring the rubber. The Commissioner has promised us if we have plenty of hands he will shorten our service.’” Leopold’s reign in the Congo became infamous because of the severe persecution and abuse of the Congolese. From 1885 – 1908, millions of Congolese died because of exploitation and disease. In some areas, the population declined dramatically due to diseases such as sleeping sickness and smallpox. A government commission later concluded that the population of the Congo was “reduced by half” during this period, but no accurate records exist. When news of Leopold’s policies and practices in the Congo Free State reached the public, the world was outraged. Calls were issued to have Leopold stripped of his colonial possession. Instead, Belgium’s parliament annexed the Congo Free State and took over its administration on November 15, 1908. It became the Belgian Congo. Enter the French: Colonial Overlords of North and West Africa The French began their colonization efforts before the Scramble for Africa. During the mid-1800s, they launched exploration through Africa and Asia. Amid increasing rivalry with other Western European nations (particularly England and, later, Germany), France began colonizing territory in earnest during the late 1800s. As a result, vast regions in both Asia and Africa came under French control. French West Africa As the French pursued their part in the Scramble for Africa in the 1880s and 1890s, they conquered large territories in the north and west of Africa. 
These conquered areas were usually governed by French Army officers and dubbed “Military Territories.” In 1895, the French created the colony of French West Africa. The colony consisted of Mauritania, Senegal, French Sudan (now Mali), French Guinea (now Guinea), Côte d’Ivoire, Upper Volta (now Burkina Faso), Dahomey (now Benin), and Niger. The Maghreb The French also focused their attention on colonizing much of Northern Africa, known as the Maghreb. The region (present-day Algeria, Libya, Morocco, and Tunisia) bordered both the Sahara Desert and the Mediterranean Sea. As such, the region seemed defendable and gave the French access to the most important sea trade route in Europe—the Mediterranean. Ideally, this meant that the French could exploit resources from their African colonies, as well as quickly and efficiently transport the goods by water to Europe. French Algeria Of all the French colonies in Africa, Algeria proved the most significant. France and Algeria had a long history of trade, and the capital city, Algiers, was a wealthy city situated conveniently on the Mediterranean Sea. It had been governed by a ruler appointed from the Turkish army for centuries, but the indigenous Berber people had remained independent. Since the late 1700s, olive oil, grain, and other foods had poured into France from Algiers. Moreover, the city prospered from extensive trade of beautiful carpets, among other luxury goods, throughout the Mediterranean. From the French perspective, the city was the ultimate prize. In 1830, France launched a campaign to claim Algiers. However, they severely underestimated the resistance they would encounter in Algeria. Arab and Berber clans united against the French invasion. Rallying under a popular commander, the Berber and Arab troops fought fiercely, with thousands of casualties on both sides. But by the 1870s, the French had conquered Algeria. Settlers poured into the colony and seized Algerian vineyards, farms, and crops. 
Initially, France prospered from possessing Algeria. However, underground resistance remained strong throughout French rule. Violence exploded between the French colonizers and the Berbers and Arabs. Within a century, French Algeria would collapse, and a fiercely independent Algeria would rise out of the Sahara. French Colonial Practices Assimilation was one of the ideological hallmarks of French colonial policy in the 19th and 20th centuries. In contrast with British imperial policy, it maintained that natives of French colonies were considered French citizens with full citizenship rights, as long as they adopted French culture and customs. Colonial Assimilation A hallmark of the French colonial project in the late 19th century and early 20th century was the civilizing mission, the principle that it was Europe’s duty to bring civilization to “backward” people. Rather than merely govern colonial populations, the Europeans would attempt to Westernize them in accordance with a colonial ideology known as “assimilation,” which was meant to make the colonized act and think like the colonizers. France pursued a policy of assimilation throughout much of its colonial empire. In contrast with British imperial policy, the French taught their subjects that by adopting French language and culture, they could eventually become French. Natives of these colonies were considered French citizens as long as French culture and customs were adopted. Adoption of French customs was supposed to confer the rights and duties of French citizens. French conservatives denounced the assimilationist policies as products of a dangerous liberal fantasy. Unlike in Algeria, Tunisia, and French West Africa, in the Protectorate of Morocco the French administration attempted to use segregationist urban planning and colonial education to prevent cultural mixing and uphold the traditional society upon which the French depended for collaboration, with mixed results. 
After World War II, the segregationist approach modeled in Morocco had been discredited and assimilationism enjoyed a brief revival. A Young Country's Quick Colonial Rise: The German Colonies German Chancellor Otto von Bismarck strongly opposed the notion of overseas colonies. He predicted rivalry, unnecessary violence, and competition. However, following his dismissal from office, German politics assumed a different course. A “keep up or be left behind” mentality consumed the German public. Pressure to establish colonies for international prestige exploded. By the late 1800s, Germany had joined the Scramble for Africa, citing the need for resources to fuel the factories that had emerged during the Second Industrial Revolution. Background: Kaiser Wilhelm II and Weltpolitik In 1891, Kaiser Wilhelm II of Germany made a decisive break with the Realpolitik of Bismarck and established Weltpolitik. The aim of Weltpolitik was to transform Germany into a global power through aggressive diplomacy, the acquisition of overseas colonies, and the development of a large navy. The origins of the policy can be traced to a Reichstag debate in December 1897 during which German Foreign Secretary Bernhard von Bülow stated, “in one word: We wish to throw no one into the shade, but we demand our own place in the sun.” Acquisition of Colonies The rise of German imperialism and colonialism coincided with the latter stages of the Scramble for Africa. Initially, German individuals, rather than government entities, competed with other already established colonies and colonialist entrepreneurs. With the Germans joining the race for the last uncharted territories in Africa and in the Pacific, competition for colonies involved major European nations and several lesser powers. The German effort included the first commercial enterprises in the 1850s and 1860s in West Africa, East Africa, the Samoan Islands, and the unexplored north-east quarter of New Guinea with adjacent islands. 
German traders and merchants began to establish themselves in the African Cameroon delta and the mainland coast across from Zanzibar. Large African inland acquisitions followed, mostly to the detriment of native inhabitants. All in all, German colonies comprised territory that makes up 22 countries today, mostly in Africa, including Nigeria, Ghana, and Uganda. However, their most significant African colony in the early twentieth century was Tanzania, in East Africa. The Herero and Nama Genocide The Herero and Nama genocide was a campaign of racial extermination that the German Empire undertook in their colony of German South-West Africa (modern-day Namibia) against the Herero and Nama peoples. It is considered one of the first genocides of the 20th century. During the 17th and 18th centuries, the Herero migrated to what is today Namibia and established themselves as herdsmen. At the beginning of the 19th century, the Nama from South Africa, who already possessed some firearms, entered the land and were followed by white merchants and German missionaries. During the late 19th century, the first Europeans arrived to permanently settle the land. Primarily in Damaraland, German settlers acquired land from the Herero to establish farms. In 1883, the merchant Franz Adolf Eduard Lüderitz entered into a contract with the native elders. The exchange later became the basis of German colonial rule. The territory became a German colony under the name of German South-West Africa. Soon after, conflicts between the German colonists and the Herero herdsmen began; these were frequently disputes about access to land and water but were also fueled by the legal discrimination that white immigrants inflicted on the native population. 
Additionally, the numerous mixed offspring—children of partial German heritage—upset the German colonial administration, which was concerned with maintaining "racial purity." Between 1893 and 1903, the Herero and Nama people's land and cattle progressively passed into the hands of the German colonists. In 1903, the Herero people learned that they were to be placed in reservations, leaving more room for colonists to own land and prosper. In 1904, the Herero and Nama began a large rebellion that lasted until 1907, ending with the near destruction of the Herero people. What followed is argued by some historians to be the first genocide of the 20th century. The Germans sought to eliminate the Herero and Nama people by driving them into the Namib desert at the point of a rifle or Maxim gun. Once defeated, thousands of Herero and Nama were imprisoned in concentration camps, where the majority died of disease, abuse, and exhaustion. During the "war" against the Herero and Nama peoples, Eugen Fischer, a German scientist, came to the concentration camps to conduct medical experiments on race, using children of Herero people and mixed-race children of Herero women and German men as test subjects. Together with Theodor Mollison he also experimented upon Herero prisoners. Those experiments included sterilization and injection with smallpox, typhus, and tuberculosis. Roughly 80,000 Herero lived in German South-West Africa at the beginning of Germany's colonial rule over the area; after their revolt was defeated, they numbered approximately 15,000. In the four-year period from 1904 to 1907, approximately 65,000 Herero and 10,000 Nama people perished. England's Grasp on Africa England, just like other Western European nations, jumped feet first into the Scramble for Africa. Like its counterparts, the tiny island nation was eager to assert its dominance on the world stage. 
Tragically for the African people, particularly in South Africa, the British engaged in colonization exactly as described by the poet Hilaire Belloc: "Whatever happens, we have got the Maxim gun, and they have not." Drastically superior military technology, such as the Maxim gun and the breech-loading rifle, would determine who reigned victorious in the conquest of Africa. The British did not establish colonies in Africa as large as those of the French. They did, however, procure extremely prosperous colonies in West Africa. Notably, the British colonized Nigeria and the Gold Coast (present-day Ghana). Both colonies were wealthy in resources coveted by the British. Nigeria was a sprawling, subtropical country rich in plant diversity. Notably, it was home to extensive groves of palm trees. Under British rule, the palm oil industry increased a thousand-fold. Palm oil became a staple commodity in European life because of its uses in soap and as a lubricant for heavy machinery. The Royal Niger Company, established and owned by the British, gave them a virtual monopoly on palm oil. Moreover, the Nigerian coast opened to the Atlantic Ocean, giving the British easy access to global trade and shipping. Although the British officially colonized the capital city of Nigeria—Lagos—in the 1880s, it was not until the late 1890s and early 1900s that they were able to secure the rest of present-day Nigeria. Like the French and Germans, the British relied on force to subdue the Nigerian populations who resisted them. Fierce fighting erupted between the Nigerian resistance and the British, who also used Nigerian soldiers in their ranks. To overcome the Nigerian forces, the British used heavy artillery and columns of machine-gunners. One by one, towns throughout Nigeria fell to the British because of heavy bombardment. 
While the British hammered and suppressed the population in Nigeria, they also had to contend with resistance in their other wealthy West African colony: the Gold Coast. Located in present-day Ghana, it was, perhaps, the most aptly named of all colonies. Significant gold deposits could be found throughout the colony. In the 1870s, Dutch and Danish companies had sold out to the British, allowing Britain to declare the Gold Coast a colony. Like Nigeria, its coast opened to the Atlantic, giving the British a significant advantage in the trading and shipping of gold. Similarly, the British also faced threats of Ghanaian resistance. They countered those threats with the use of excessive force, including artillery and machine guns. British Rule in Egypt Throughout the 19th century, the ruling dynasty of Egypt spent exorbitant amounts of money on infrastructural development. Consequently, despite vast sums of European and other foreign capital, actual economic production and revenue were insufficient to repay the loans. Egypt was bankrupt. As a result, European and foreign financial agencies were able to take control of the treasury of Egypt; they forgave debt in return for taking control of the Suez Canal and reoriented economic development. By 1882, Islamic and Arabic nationalist opposition to the colonizers was growing in Egypt, which was the most powerful, populous, and influential of Arab countries. A large military demonstration in September 1881 forced the resignation of the Egyptian Prime Minister. Many of the Europeans retreated to specially designed quarters suited for defense or to heavily European-settled cities, such as Alexandria. By June 1882, a fight for control of Egypt erupted between the Europeans and the Arab nationalists. Anti-European violence broke out in Alexandria, prompting a British naval bombardment of the city. 
Later, a force of British and Indian troops easily defeated the nationalist Egyptian Army in September and took control of the country. With European aid, the Egyptian royal family remained in control of the country. But that control was reliant on the military and political aid of Western Europe, especially Britain. It is unlikely that the British expected a long-term occupation from the outset; however, Lord Cromer, Britain's Chief Representative in Egypt at the time, viewed Egypt's financial reforms as part of a long-term objective. Cromer took the view that political stability required financial stability, and he embarked on a program of long-term investment in Egypt's agricultural revenue sources, the largest of which was cotton. To accomplish this, Cromer worked to improve the Nile's irrigation system through multiple large projects: the construction of the Aswan Dam, the creation of the Nile Barrage, and an increase in the number of canals serving agricultural lands. During British occupation and control, Egypt developed into a regional commercial and trading destination. Immigrants from less-stable parts of the region—including Greeks, Jews, and Armenians—began to flow into Egypt. The number of foreigners in the country rose from 10,000 in the 1840s to around 90,000 in the 1880s and more than 1.5 million by the 1930s. South Africa and the Boer Wars Given the mindset of Europeans, large-scale war was perhaps inevitable in South Africa following the discovery of both gold and diamonds in the region. During the late 19th and early 20th centuries, ethnic, political, and social tensions among European colonial powers and indigenous Africans, as well as between English and Dutch settlers, led to open conflict in a series of wars and revolts between 1879 and 1915, most notably the First and Second Boer Wars. 
First Boer War The First Boer War was fought from December 1880 until March 1881 and was the first clash between the British and the South African Republic Boers—the Dutch and Huguenot peoples who had settled southern Africa in the late 17th century. The British, having won a war against the Zulus, attempted to impose an unpopular system of confederation in South Africa. This resulted in outrage and strong protests from the Boers. In December 1880, 5,000 Boers assembled at a farm to discuss a course of action. Tired of the British treating them as second-class citizens and of British demands on Boer agricultural production and taxation, the Boers decided to create an independent republic within South Africa. On December 13 they proclaimed their independence and intent to establish a republican government. War then erupted between the two sides. Surprisingly, the British suffered several significant military defeats during the First Boer War. As a result, the British government signed a truce on March 6. In the final peace treaty of March 23, 1881, Britain gave the Boers self-government in a small part of South Africa known as the South African Republic (Transvaal), under theoretical British oversight. Second Boer War The exact causes of the Second Boer War in 1899 have been disputed ever since the events took place. The Boers felt that the British intention was to again annex the Transvaal. Some feel that the British were coerced into war by the wealthy owners of the mining industries; others that the British government underhandedly created conditions that allowed the war to ignite. 
The British worried about popular support for the war and wanted to push the Boers to make the first move toward actual hostilities; this occurred when the Transvaal issued an ultimatum on October 9 for the British to withdraw all troops from their borders, or the Boers would "regard the action as a formal declaration of war." The Second Boer War took place from October 11, 1899 until May 31, 1902. The war was fought between the British Empire and the two independent Boer republics of the Orange Free State and the South African Republic (referred to as the Transvaal by the British). After a protracted, hard-fought war, the two independent republics lost and were absorbed into the British Empire. The Boers fought bitterly against the British, refusing to surrender for years despite defeat. They reverted to guerrilla warfare. As guerrillas without uniforms, the Boer fighters easily blended into the farmlands, which provided hiding places, supplies, and horses. The British solution was to set up complex nets of blockhouses, strong points, and barbed wire fences, partitioning off the entire conquered territory. The civilian farmers were relocated into concentration camps, where a very large proportion of them, especially children, died of disease. In all, the war cost around 75,000 lives: 22,000 British soldiers (7,792 battle casualties, the rest through disease); 6,000–7,000 Boer commandos; 20,000–28,000 Boer civilians (mostly women and children, due to disease in the concentration camps); and an estimated 20,000 black Africans allied with both sides. The last of the Boers surrendered in May 1902. The war resulted in the creation of the Transvaal Colony, which in 1910 was incorporated into the Union of South Africa. The treaty ended the existence of the South African Republic and the Orange Free State as Boer republics, placing them within the British Empire. Significance The Scramble for Africa was, indeed, a scramble. 
It was a mad free-for-all in which European countries hurried to colonize territory in Africa for two purposes: natural resources and human labor. Additionally, this event may have occurred as a show of strength amidst increasingly rivalrous, nationalist European nations. Indeed, this mad period of colonization would emerge as one of the underlying causes of World War I. Britain, France, and Germany (all major combatant nations in World War I) proved the most successful in colonizing Africa. However, Italy, Spain, and Portugal also colonized, or attempted to colonize, parts of Africa. By 1914, when World War I began, only two independent countries remained in all of Africa: Ethiopia, which the Italians had tried to colonize, and Liberia, a country established for freed slaves by the United States. Across Africa, it was the many different African peoples who lost in the Scramble for Africa. Across the board, Europeans regularly used excessive military force to subdue resistant civilians. African cultures, languages, land, and livelihoods were all suppressed or destroyed during the Scramble for Africa. Moreover, Africans lost their chance to modernize on their own terms, just as many African nations had begun the process of modernizing politically, economically, and industrially. In terms of sheer numbers, the colony that endured the worst human rights abuses was the Congo Free State under King Leopold II of Belgium. Treatment of the Congolese people by the Force Publique and other agencies was so brutal and heinous that it sparked an uproar from the international community. Estimates suggest that nearly 10 million Congolese died during the period of the Congo Free State. That figure is nearly as high as the total deaths of the Holocaust (estimated at 12 million), a fact largely ignored or forgotten by much of the world. 
Attributions Images courtesy of Wikimedia Commons Boundless World History “The Berlin Conference” https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-berlin-conference/ https://creativecommons.org/licenses/by-sa/4.0/ “The Belgian Congo” https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-belgian-congo/ https://creativecommons.org/licenses/by-sa/4.0/ “France in Africa” https://courses.lumenlearning.com/boundless-worldhistory/chapter/france-in-africa/ https://creativecommons.org/licenses/by-sa/4.0/ “German Imperialism” https://courses.lumenlearning.com/boundless-worldhistory/chapter/german-imperalism/ https://creativecommons.org/licenses/by-sa/4.0/ “Africa and the United Kingdom” https://courses.lumenlearning.com/boundless-worldhistory/chapter/africa-and-the-united-kingdom/ https://creativecommons.org/licenses/by-sa/4.0/ Boahen, A. Abu. African Perspectives on Colonialism. Johns Hopkins University Press, 1987. 1-26. Shillington, Kevin. History of Africa. 3rd Ed. Palgrave MacMillan, 2012. 281-282; 287 288; 319-321.
"Statewide Dual Credit World History: Colonization of Africa" by Anna McCollum https://oercommons.org/courseware/lesson/87943/overview https://creativecommons.org/licenses/by/4.0/
Chinese Revolution Overview Introduction Shortly after the conclusion of World War II, the Chinese Communist Party seized power in China in 1949. Under the leadership of the party dictator, Mao Zedong, the Communists in China developed their own version of Marxism-Leninism in the 1950s and 1960s and eventually challenged the Soviet Union for leadership of the worldwide Communist Revolution. Learning Objectives - Assess how the conflict between the Nationalist Party (Kuomintang) and the Chinese Communist Party was affected by external and internal developments in China. - Identify factors that contributed to the Chinese Communist Party's victory in the Civil War. - Examine the economic, political, and cultural changes resulting from the Chinese Revolution. - Examine the Nationalist Party in the Chinese Revolution and its settlement in Taiwan. Key Terms / Key Concepts Chinese Civil War: a civil war in China fought between forces loyal to the Kuomintang (KMT)-led government of the Republic of China and forces loyal to the Communist Party of China (CPC) (The war began in August 1927 with Generalissimo Chiang Kai-shek's Northern Expedition and ended when major hostilities ceased in 1950.) Five Year Plan: a nationwide centralized economic plan in the Soviet Union developed by a state planning committee that was part of the ideology of the Communist Party for the development of the Soviet economy (A series of these plans was developed in the Soviet Union, while similar Soviet-inspired plans emerged across other communist countries during the Cold War era.) Great Chinese Famine: a period in the People's Republic of China between the years 1959 and 1961 characterized by widespread famine that resulted in deaths ranging from 20 million to 43 million (Drought, poor weather, and the policies of the Communist Party of China (Great Leap Forward) contributed, although the relative weights of these contributions are disputed.) 
Great Leap Forward: an economic and social campaign by the Communist Party of China (CPC) that took place from 1958 to 1961 and was led by Mao Zedong aimed at rapidly transforming the country from an agrarian economy into a socialist society through quick industrialization and collectivization (It is widely considered to have caused the Great Chinese Famine.) Hundred Flowers Campaign: a period in 1956 in the People’s Republic of China during which the Communist Party of China (CPC) encouraged its citizens to openly express their opinions of the communist regime (Differing views and solutions to national policy were encouraged based on the famous expression by Communist Party Chairman Mao Zedong: “The policy of letting a hundred flowers bloom and a hundred schools of thought contend is designed to promote the flourishing of the arts and the progress of science.” After this brief period of liberalization, Mao abruptly changed course.) Khrushchev’s “Secret Speech”: a report by Soviet leader Nikita Khrushchev made to the 20th Congress of the Communist Party of the Soviet Union on February 25, 1956 in which Khrushchev was sharply critical of the reign of deceased General Secretary and Premier Joseph Stalin, particularly with respect to the purges which marked the late 1930s Kuomintang: a major political party in the Republic of China founded by Song Jiaoren and Sun Yat-sen shortly after the Xinhai Revolution of 1911; currently the second-largest political party in the country, often translated as the Nationalist Party of China or Chinese Nationalist Party (Its predecessor, the Revolutionary Alliance, was one of the major advocates of the overthrow of the Qing Dynasty and the establishment of a republic.) 
Maoism: a political theory derived from the teachings of Chinese political leader Mao Zedong (1893 – 1976); developed from the 1950s until the Deng Xiaoping reforms in the 1970s, the guiding political and military ideology of the Communist Party of China (CPC) and revolutionary movements around the world The Chinese Civil War The Chinese Civil War, fought between forces loyal to the Nationalist Kuomintang-led government (KMT) and those loyal to the Communist Party of China (CPC), represented an ideological split between the CPC and the KMT and resulted in the establishment of the People’s Republic of China and the exodus of the nationalists to Taiwan. It continued intermittently until late 1937, when the two parties came together to form the Second United Front to counter the Japanese threat and prevent the country from crumbling. However, the alliance of the CPC and the KMT was in name only. The level of actual cooperation and coordination between the two parties during World War II was at best minimal. In the midst of the Second United Front, the CPC and the KMT still vied for territorial advantage in “Free China” (i.e., areas not occupied by the Japanese or ruled by Japanese puppet governments). In general, developments in the Second Sino-Japanese War were to the advantage of the CPC, as its guerrilla war tactics won them popular support within the Japanese-occupied areas, while the KMT had to defend the country against the main Japanese campaigns since it was the legal Chinese government. Under the terms of the Japanese unconditional surrender dictated by the United States, Japanese troops were ordered to surrender to KMT troops and not to the CPC, which was present in some of the occupied areas. In Manchuria, however, where the KMT had no forces, the Japanese surrendered to the Soviet Union. Chiang Kai-shek ordered the Japanese troops to remain at their posts to receive the Kuomintang and not surrender their arms to the Communists. 
However, in the last month of World War II in East Asia, Soviet forces launched a huge strategic offensive operation to attack the Japanese Kwantung Army in Manchuria and along the Chinese-Mongolian border. Chiang Kai-shek realized that he lacked the resources to prevent a CPC takeover of Manchuria following the scheduled Soviet departure. A fragile truce between the competing forces fell apart on June 21, 1946, when full-scale war between the CPC and the KMT broke out. On July 20, 1946, Chiang Kai-shek launched a large-scale assault on Communist territory, marking the final phase of the Chinese Civil War. After three years of exhausting military campaigns, on October 1, 1949, Mao Zedong proclaimed the People's Republic of China, with its capital in Beijing. Chiang Kai-shek and approximately two million Nationalist Chinese retreated from mainland China to the island of Taiwan after the loss of Sichuan (the ROC had administered Taiwan since the Japanese surrender in 1945, although Japan did not formally renounce sovereignty over the island until 1952). In December 1949, Chiang proclaimed Taipei, Taiwan, the temporary capital of the Republic of China and continued to assert his government as the sole legitimate authority in China. During the war, both the Nationalists and the Communists carried out mass atrocities, with millions of non-combatants deliberately killed by both sides. Benjamin Valentino has estimated that these atrocities resulted in the deaths of between 1.8 million and 3.5 million people between 1927 and 1949. Atrocities included deaths from forced conscription, as well as massacres. The United States and the Chinese Civil War During World War II, the United States emerged as a major actor in Chinese affairs. As an ally, it embarked in late 1941 on a program of massive military and financial aid to the hard-pressed Nationalist government. In January 1943 the United States and Britain led the way in revising their treaties with China, bringing to an end a century of unequal treaty relations. 
Within a few months, a new agreement was signed between the United States and China for the stationing of American troops in China for the common war effort against Japan. In December 1943 the Chinese exclusion acts of the 1880s and subsequent laws enacted by the United States Congress to restrict Chinese immigration into the United States were repealed. The wartime policy of the United States was initially to help China become a strong ally and a stabilizing force in postwar East Asia. As the conflict between the Nationalists and the Communists intensified, however, the United States sought unsuccessfully to reconcile the rival forces for a more effective anti-Japanese war effort. Toward the end of the war, United States Marines were used to hold Beiping and Tianjin against a possible Soviet incursion, and logistic support was given to Nationalist forces in north and northeast China. Through the mediatory influence of the United States a military truce was arranged in January 1946, but battles between Nationalists and Communists soon resumed. Realizing that American efforts short of large-scale armed intervention could not stop the war, the United States withdrew the American mission, headed by General George C. Marshall, in early 1947. The civil war, in which the United States aided the Nationalists with massive economic loans but no military support, became more widespread. Battles raged not only for territories but also for the allegiance of cross sections of the population. The Nationalist government sought to enlist popular support through internal reforms. The effort was in vain, however, because of the rampant corruption in government and the accompanying political and economic chaos. By late 1948 the Nationalist position was bleak. The demoralized and undisciplined Nationalist troops proved no match for the People's Liberation Army (PLA). The Communists were well established in the north and northeast. 
Although the Nationalists had an advantage in numbers of men and weapons, controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In January 1949 Beiping was taken by the Communists without a fight, and its name was changed back to Beijing. Between April and November, major cities passed from Kuomintang to Communist control with minimal resistance. In most cases the surrounding countryside and small towns had come under Communist influence long before the cities. After Chiang Kai-shek and a few hundred thousand Nationalist troops fled from the mainland to the island of Taiwan, there remained only isolated pockets of resistance. In December 1949 Chiang proclaimed Taipei, Taiwan, the temporary capital of China. Taiwan or the Republic of China? The resumption of the Chinese Civil War led to the ROC's loss of the mainland to the Communists and the flight of the ROC government to Taiwan in 1949. The island of Taiwan was mainly inhabited by Taiwanese aborigines before the 17th century, when Dutch and Spanish colonies opened the island to Han Chinese immigration. After a brief rule by the Kingdom of Tungning, the island was annexed by the Qing dynasty, which was the last dynasty of China. The Qing ceded Taiwan to Japan in 1895 after the First Sino-Japanese War. While Taiwan was under Japanese rule, the Republic of China (ROC) was established on the mainland in 1912 after the fall of the Qing dynasty. Following the Japanese surrender to the Allies in 1945, the ROC took control of Taiwan. Although the ROC claimed to be the legitimate government of "all of China" until 1991, its effective jurisdiction since 1949 has been limited to Taiwan and its surrounding islands, with the main island making up 99% of its territory. 
The official name of the entity remains the Republic of China, although its political status is highly ambiguous. The ROC was a charter member of the United Nations. Despite the major loss of territory in 1949 when the People's Republic of China was established by the Communists, the ROC was still recognized as the legitimate government of China by the UN and many non-communist states. However, in 1971 the UN expelled the ROC and transferred China's seat to the People's Republic of China (PRC). In addition, the ROC lost its membership in all intergovernmental organizations related to the UN. Most countries aligned with the West in the Cold War terminated diplomatic relations with the ROC and recognized the PRC instead. The ROC continues to maintain relations with the UN and most of its non-governmental organizations. However, multiple attempts by the Republic of China to rejoin the UN have failed, largely due to diplomatic maneuvering by the PRC. The ROC is recognized by a small number of United Nations member states and by the Holy See—the Catholic Pope and the territories that he governs. It maintains diplomatic relations with those countries, which means they recognize the ROC government as the representative of China but not the independent status of Taiwan as a state. The PRC refuses to maintain diplomatic relations with any nation that recognizes the ROC but does not object to nations conducting economic, cultural, and other exchanges with Taiwan that do not imply diplomatic relations. Therefore, many nations that have diplomatic relations with Beijing maintain quasi-diplomatic offices in Taipei. Similarly, the government in Taiwan maintains quasi-diplomatic offices in most nations under various names, most commonly as the Taipei Economic and Cultural Office. The ROC participates in most international forums and organizations under the name "Chinese Taipei" due to diplomatic pressure from the People's Republic of China. 
For instance, it has competed at the Olympic Games under this name since 1984. Taiwan's Political System Taiwan is currently the 21st-largest economy in the world, and its high-tech industry plays a key role in the global economy. It is ranked highly in terms of freedom of the press, health care, public education, economic freedom, and human development. This was not always the case in Taiwan's history. On February 28, 1947, an anti-government uprising in Taiwan was violently suppressed by the Kuomintang-led ROC government, which killed thousands of civilians. The massacre, known as the February 28 Incident, marked the beginning of the Kuomintang's White Terror period in Taiwan, in which tens of thousands more inhabitants vanished, died, or were imprisoned. The White Terror, in its broadest meaning, was the period of martial law that lasted for 38 years and 57 days. Chiang Ching-kuo—Chiang Kai-shek's son and successor as president—began to liberalize the political system in the mid-1980s. In 1984, the younger Chiang selected Lee Teng-hui—a Taiwanese-born, US-educated technocrat—to be his vice president. In 1986, the Democratic Progressive Party (DPP) was formed and inaugurated as the first opposition party in the ROC to counter the KMT. A year later, Chiang Ching-kuo lifted martial law on the main island of Taiwan. After the death of Chiang Ching-kuo in 1988, Lee Teng-hui succeeded him as president and continued to democratize the government. Under Lee, Taiwan underwent a process of localization in which Taiwanese culture and history were promoted over a pan-China viewpoint, in contrast to earlier KMT policies that promoted a Chinese identity. The original members of the Legislative Yuan and National Assembly, elected in 1947 to represent mainland Chinese constituencies and holding their seats without re-election for more than four decades, were forced to resign in 1991. 
The previously nominal representation in the Legislative Yuan was thus brought to an end, reflecting the reality that the ROC had no jurisdiction over mainland China and vice versa. Democratic reforms continued in the 1990s, with Lee Teng-hui re-elected in 1996 in the first direct presidential election in the history of the ROC. Through these reforms, Taiwan transformed from a one-party military dictatorship dominated by the Kuomintang into a multi-party democracy with universal suffrage. Although Taiwan is fully self-governing, most international organizations either refuse it membership or allow it to participate only as a non-state actor. Internally, the major division in politics is between the aspirations of eventual Chinese unification or Taiwanese independence, although both sides have moderated their positions to broaden their appeal. The PRC has threatened the use of military force in response to any formal declaration of independence by Taiwan or if PRC leaders decide that peaceful unification is no longer possible. Cross-Strait Relations The English expression "cross-strait relations" refers to relations between the PRC and the ROC; it is used by the two sides concerned and by many observers so that the relationship between China and Taiwan is not referred to as "(Mainland) China–Taiwan relations" or "PRC–ROC relations." The Chinese Civil War stopped without the signing of a peace treaty, and the two sides are technically still at war. Since 1949, relations between the PRC and the ROC have been characterized by limited contact, tensions, and instability. In the early years, military conflicts continued, while diplomatically both governments competed to be the "legitimate government of China." On January 1, 1979, Beijing proposed the establishment of the so-called Three Links: postal, commercial, and transportation. The proposal was met by ROC President Chiang Ching-kuo with the Three-Nos Policy ("no contact, no compromise, and no negotiation"). 
In 1987, the ROC government began to allow visits to China. This benefited many, especially old KMT soldiers who had been separated from their families in China for decades. This also proved a catalyst for the thawing of relations between the two sides, although difficult negotiations continued and the Three Links were officially established only in 2008. Cross-strait investments have greatly increased since 2008. Predominantly, this involves Taiwan-based firms moving to or collaborating in joint ventures in the PRC. China remains Taiwan’s top trading partner. Cultural exchanges have also increased in frequency. The National Palace Museum in Taipei and the Palace Museum in Beijing have collaborated on exhibitions. Scholars and academics frequently visit institutions on the other side. Books published on each side are regularly republished on the other side, although restrictions on direct imports and different orthography somewhat impede the exchange of books and ideas. Religious exchange has also become common, with frequent interactions between worshipers of Matsu and Buddhists.

Maoism

The ideologies of the Chinese Communist Party in mainland China have significantly evolved since it established political power in China in 1949. Mao Zedong’s revolution that founded the PRC was nominally based on Marxism-Leninism with a rural focus (based on China’s social conditions at the time). During the 1960s and 1970s, the CPC experienced a significant ideological breakdown with the Communist Party of the Soviet Union and their allies. Mao’s peasant revolutionary vision and so-called “continued revolution under the dictatorship of the proletariat” stipulated that class enemies continued to exist even though the socialist revolution seemed to be complete, giving way to the Cultural Revolution. This fusion of ideas became known officially as Mao Zedong Thought or Maoism outside of China.
It represented a powerful branch of communism that existed in opposition to the Soviet Union’s Marxist revisionism. The essential difference between Maoism and other forms of Marxism is that Mao claimed that peasants should be the essential revolutionary class in China because they were more suited than industrial workers to establish a successful revolution and socialist society in China. Maoism was widely applied as the guiding political and military ideology of the CPC. It evolved with Chairman Mao’s changing views, but its main components are “New Democracy”, “People’s war”, “Mass line”, “cultural revolution”, “three worlds”, and “agrarian socialism”. The “New Democracy” aims to overthrow feudalism and achieve independence from colonialism. However, it dispenses with the rule of a capitalist class that Marx and Lenin predicted would usually follow such a struggle, claiming instead to enter directly into socialism through a coalition of classes fighting the old ruling order. The original symbolism of the flag of China derives from the concept of the coalition. The largest star symbolizes the Communist Party of China’s leadership, and the surrounding four smaller stars symbolize “the bloc of four classes”: proletarian workers, peasants, the petty bourgeoisie (small business owners), and the nationally-based capitalists. This is the coalition of classes for Mao’s New Democratic Revolution.
Maoism emphasizes the “revolutionary struggle of the vast majority of people against the exploiting classes and their state structures,” which Mao termed “People’s war.” The “People’s war” maintains that “Political power grows out of the barrel of a gun.” Mobilizing large parts of rural populations to revolt against established institutions by engaging in guerrilla warfare, Maoism focuses on “surrounding the cities from the countryside.” It views the industrial-rural divide as a major division exploited by capitalism, involving industrial urban developed “First World” societies ruling over rural developing “Third World” societies. The “Mass line” theory holds that the communist party must not be separate from the popular masses, either in policy or in revolutionary struggle. This theory runs contrary to the view of Lenin and the Bolsheviks in the Russian Revolution that the intellectual elite in the party lead the masses. To conduct a successful revolution, according to Maoism, the needs and demands of the masses must be paramount. The “Cultural revolution” theory states that the proletarian revolution and the dictatorship of the proletariat do not wipe out bourgeois ideology. The class struggle continues, and even intensifies, during socialism. Therefore, a constant struggle against these ideologies and their social roots must be conducted. The revolution’s stated goal was to preserve “true” Communist ideology in the country by purging remnants of capitalist and traditional elements from Chinese society, and to re-impose Maoist thought as the dominant ideology within the Party. The concept was applied in practice in 1966, which marked the return of Mao Zedong to a position of power after the Great Leap Forward (a failed 1958 – 1961 economic and social campaign that aimed to rapidly transform the country from an agrarian economy into a socialist society through industrialization and collectivization).
The movement paralyzed China politically and negatively affected the country’s economy and society to a significant degree. The “Three Worlds” theory states that during the Cold War, two imperialist states formed the First World: the United States and the Soviet Union. The Second World consisted of the other imperialist states in their spheres of influence. The Third World consisted of the non-imperialist countries. Both the First and the Second World exploit the Third World, but the First World more aggressively so. In its concept of “agrarian socialism”, Maoism departs from conventional European-inspired Marxism in that its focus is on the agrarian countryside rather than the industrial urban forces. Although Maoism is critical of urban industrial capitalist powers, it views urban industrialization as a prerequisite to expand economic development and socialist reorganization to the countryside, with the goal of rural industrialization that would abolish the distinction between town and countryside.

The People's Republic of China

On October 1, 1949, the People's Republic of China was formally established, with its national capital at Beijing. "The Chinese people have stood up!" declared Mao as he announced the creation of a "people's democratic dictatorship." The people were defined as a coalition of four social classes: the workers, the peasants, the petite bourgeoisie, and the national-capitalists. The four classes were to be led by the CCP, which was meant to be the vanguard of the working class. At that time the CCP claimed a membership of 4.5 million, of which members of peasant origin accounted for nearly 90 percent. The party was under Mao's chairmanship, and the government was headed by Zhou Enlai (1898 – 1976) as premier of the State Administrative Council (the predecessor of the State Council). The Soviet Union recognized the People's Republic on October 2, 1949.
Earlier in the year, Mao had proclaimed his policy of "leaning to one side" as a commitment to the socialist bloc. In February 1950, after months of hard bargaining, China and the Soviet Union signed the Treaty of Friendship, Alliance, and Mutual Assistance, valid until 1980. The pact also was intended to counter Japan or any power's joining Japan for the purpose of aggression. In the first year of Communist administration, moderate social and economic policies were implemented with skill and effectiveness. For the first time in decades a Chinese government was met with peace, instead of massive military opposition, within its territory. The new leadership was highly disciplined and, having a decade of wartime administrative experience to draw on, was able to embark on a program of national integration and reform. The leadership realized that the overwhelming task of economic reconstruction and achievement of political and social stability required the goodwill and cooperation of all classes of people. Results were impressive by any standard, and popular support was widespread. By 1950 international recognition of the Communist government had increased considerably, but it was slowed by China's involvement in the Korean War. In October 1950, sensing a threat to the industrial heartland in northeast China from the advancing United Nations (UN) forces in the Democratic People's Republic of Korea (North Korea), units of the PLA—calling themselves the Chinese People's Volunteers—crossed the Yalu River into North Korea in response to North Korea's and the Soviet Union’s request for aid. Almost simultaneously the PLA forces also marched into Xizang (Tibet) to reassert Chinese sovereignty over a region that had been in effect independent of Chinese rule since the fall of the Qing dynasty in 1911. In 1951 the UN declared China to be an aggressor in Korea and sanctioned a global embargo on the shipment of arms and war material to China.
This step foreclosed any possibility that the People's Republic might replace Nationalist China (on Taiwan) as a member of the UN and as a veto-holding member of the UN Security Council, at least for the time being. After China entered the Korean War, the initial moderation in Chinese domestic policies gave way to a massive campaign against the "enemies of the state," actual and potential. These enemies consisted of "war criminals, traitors, bureaucratic capitalists, and counterrevolutionaries." The campaign was combined with party-sponsored trials attended by huge numbers of people. The major targets in this drive were foreigners and Christian missionaries who were branded as United States agents at these mass trials. The 1951 – 52 drive against political enemies was accompanied by land reform, which had actually begun under the Agrarian Reform Law of June 28, 1950. The redistribution of land was accelerated, and a class struggle against landlords and wealthy peasants was launched. An ideological reform campaign requiring self-criticisms and public confessions by university faculty members, scientists, and other professional workers was given wide publicity. Artists and writers were soon the objects of similar treatment for failing to heed Mao's dictum that culture and literature must reflect the class interest of the working people, led by the CCP. These campaigns were accompanied in 1951 and 1952 by the san fan ("three anti") and wu fan ("five anti") movements. The former was directed ostensibly against the evils of "corruption, waste, and bureaucratism"; its real aim was to eliminate incompetent and politically unreliable public officials and to bring about an efficient, disciplined, and responsive bureaucratic system.
The wu fan movement aimed at eliminating recalcitrant and corrupt businessmen and industrialists, who were in effect the targets of the CCP's condemnation of "tax evasion, bribery, cheating in government contracts, thefts of economic intelligence, and stealing of state assets." In the course of this campaign the party claimed to have uncovered a well-organized attempt by businessmen and industrialists to corrupt party and government officials. This charge was enlarged into an assault on independent businesspeople (the “bourgeoisie”) as a whole. The number of people affected by the various punitive or reform campaigns was estimated in the millions.

The Transition to Socialism, 1953-1957

The period of officially designated "transition to socialism" corresponded to China's First Five-Year Plan (1953 – 57). The period was characterized by efforts to achieve industrialization, collectivization of agriculture, and political centralization. The First Five-Year Plan stressed the development of heavy industry on the Soviet model. Soviet economic and technical assistance was expected to play a significant part in the implementation of the plan, and technical agreements were signed with the Soviets in 1953 and 1954. To facilitate economic planning, the first modern census was taken in 1953; the population of mainland China was shown to be 583 million, a figure far greater than had been anticipated. Therefore, among China's most pressing needs in the early 1950s were food for its burgeoning population, domestic capital for investment, and purchase of Soviet-supplied technology, capital equipment, and military hardware. To satisfy these needs, the government began to collectivize agriculture. Despite internal disagreement as to the speed of collectivization, which at least for the time being was resolved in Mao's favor, preliminary collectivization was 90 percent completed by the end of 1956. In addition, the government nationalized banking, industry, and trade.
Private enterprise in mainland China had been virtually abolished. Major political developments included the centralization of party and government administration. Elections were held in 1953 for delegates to the First National People's Congress, China's national legislature, which met in 1954. Only communist party members could run as candidates in these elections. The congress adopted the state constitution of 1954 and formally elected Mao chairman (or president) of the People's Republic; it elected Liu Shaoqi (1898 – 1969) chairman of the Standing Committee of the National People's Congress; and named Zhou Enlai premier of the new State Council. In the midst of these major governmental changes, and helping to precipitate them, was a power struggle within the CCP leading to the 1954 purge of Political Bureau member Gao Gang and Party Organization Department head Rao Shushi, who were accused of illicitly trying to seize control of the party. The process of national integration also was characterized by improvements in party organization under the administrative direction of the secretary general of the party Deng Xiaoping (who served concurrently as vice premier of the State Council). There was a marked emphasis on recruiting intellectuals, who by 1956 constituted nearly 12 percent of the party's 10.8 million members. Peasant membership had decreased to 69 percent, while the party ranks included an increasing number of "experts" needed for the party and governmental infrastructures. As part of the effort to encourage the participation of intellectuals in the new regime, in mid-1956 there began an official effort to liberalize the political climate. Cultural and intellectual figures were encouraged to speak their minds on the state of CCP rule and programs. Mao personally took the lead in the movement, which was launched under the classical slogan "Let a hundred flowers bloom, let the hundred schools of thought contend."
At first the party's repeated invitation to air constructive views freely and openly was met with caution. By mid-1957, however, the movement unexpectedly mounted, bringing denunciation and criticism against the party in general and the excesses of its party members in particular. Startled and embarrassed, leaders turned on the critics as "bourgeois rightists" and launched the Anti-Rightist Campaign. The Hundred Flowers Campaign, sometimes called the Double Hundred Campaign, apparently had a sobering effect on the CCP leadership.

The Great Leap Forward, 1958-1960

The anti-rightist drive was followed by a militant approach toward economic development. In 1958 the CCP launched the Great Leap Forward campaign under the new "General Line for Socialist Construction." The Great Leap Forward was aimed at accomplishing the economic and technical development of the country at a vastly faster pace and with greater results. The shift to the left that the new "General Line" represented was brought on by a combination of domestic and external factors. Although the party leaders appeared generally satisfied with the accomplishments of the First Five-Year Plan, they—Mao and his fellow radicals in particular—believed that more could be achieved in the Second Five-Year Plan (1958 – 62) if the people could be ideologically aroused and if domestic resources could be utilized more efficiently for the simultaneous development of industry and agriculture. These assumptions led the party to an intensified mobilization of the peasantry and mass organizations, stepped-up ideological guidance and indoctrination of technical experts, and encouraged efforts to build a more responsive political system.
The last of these undertakings was to be accomplished through a new xiafang (down to the countryside) movement, under which cadres inside and outside the party would be sent to factories, communes, mines, and public works projects for manual labor and firsthand familiarization with grassroots conditions. Although evidence is sketchy, Mao's decision to embark on the Great Leap Forward was based in part on his uncertainty about the Soviet policy of economic, financial, and technical assistance to China. That policy, in Mao's view, not only fell far short of his expectations and needs but also made him wary of the political and economic dependence in which China might find itself. The Great Leap Forward centered on a new socioeconomic and political system created in the countryside and in a few urban areas—the people's communes. By the fall of 1958, some 750,000 agricultural producers' cooperatives, now designated as production brigades, had been amalgamated into about 23,500 communes, each averaging 5,000 households or 22,000 people. The individual commune was placed in control of all the means of production and was to operate as the sole accounting unit; it was subdivided into production brigades (generally identical with traditional villages) and production teams. Each commune was planned as a self-supporting community for agriculture, small-scale local industry (for example, the famous backyard pig-iron furnaces), schooling, marketing, administration, and local security (maintained by militia organizations). Organized along paramilitary and laborsaving lines, the commune had communal kitchens, mess halls, and nurseries. In a way, the people's communes constituted a fundamental attack on the institution of the family, especially in a few model areas where radical experiments in communal living— large dormitories in place of the traditional nuclear family housing—occurred. (But those large dormitories were quickly dropped.) 
The system also was based on the assumption that it would release additional manpower for such major projects as irrigation works and hydroelectric dams, which were seen as integral parts of the plan for the simultaneous development of industry and agriculture. The Great Leap Forward was an economic failure and resulted in the Great Chinese Famine. In early 1959, amid signs of rising popular restiveness, the CCP admitted that the favorable production report for 1958 had been exaggerated. Among the Great Leap Forward's economic consequences were a shortage of food (in which natural disasters also played a part); shortages of raw materials for industry; overproduction of poor-quality goods; deterioration of industrial plants through mismanagement; and exhaustion and demoralization of the peasantry and of the intellectuals, not to mention the party and government cadres at all levels. Throughout 1959 efforts to modify the administration of the communes got under way; these were intended partly to restore some material incentives to the production brigades and teams, partly to decentralize control, and partly to house families that had been reunited as household units. Political consequences were not inconsiderable. In April 1959 Mao, who bore the chief responsibility for the Great Leap Forward fiasco, stepped down from his position as chairman of the People's Republic. The National People's Congress elected Liu Shaoqi as Mao's successor, though Mao remained chairman of the CCP. Moreover, Mao's Great Leap Forward policy came under open criticism at a party conference at Lushan, Jiangxi Province. The attack was led by Minister of National Defense Peng Dehuai, who had become troubled by the potentially adverse effect Mao's policies would have on the modernization of the armed forces. 
Peng argued that "putting politics in command" was no substitute for economic laws and realistic economic policy; unnamed party leaders were also admonished for trying to "jump into communism in one step." After the Lushan showdown, Peng Dehuai, who allegedly had been encouraged by Soviet leader Nikita Khrushchev to oppose Mao, was deposed. Peng was replaced by Lin Biao, a radical and opportunist Maoist. The new defense minister initiated a systematic purge of Peng's supporters from the military. Militancy on the domestic front was echoed in external policies. The "soft" foreign policy based on the Five Principles of Peaceful Coexistence to which China had subscribed in the mid-1950s gave way to a "hard" line in 1958. From August through October of that year, the Chinese resumed a massive artillery bombardment of the Nationalist-held offshore islands of Jinmen and Mazu, controlled by Taiwan. This was accompanied by an aggressive propaganda assault on the United States and a declaration of intent to "liberate" Taiwan. Chinese control over Tibet had been reasserted in 1950. The socialist revolution that took place thereafter increasingly became a process of imposing Chinese culture on the Tibetans. Tension culminated in a revolt in 1958 – 59 and the flight to India by the Dalai Lama—the Tibetans' spiritual and de facto temporal leader. Relations with India, where sympathy for the rebels was aroused, deteriorated as thousands of Tibetan refugees crossed the Indian border. There were several border incidents in 1959, and a brief Sino-Indian border war erupted in October 1962 as China laid claim to Aksai Chin—nearly 103,600 square kilometers of territory that India regarded as its own. The Soviet Union gave India its moral support in the dispute, thus contributing to the growing tension between Beijing and Moscow. The Sino-Soviet dispute of the late 1950s was the most important development in Chinese foreign relations. 
The Soviet Union had been China's principal benefactor and ally, but relations between the two were cooling. The Soviet agreement in late 1957 to help China produce its own nuclear weapons and missiles was terminated by mid-1959. From that point until the mid-1960s, the Soviets recalled all of their technicians and advisers from China and reduced or canceled economic and technical aid to China. The discord was occasioned by several factors. The two countries differed in their interpretation of the nature of "peaceful coexistence." The Chinese took a more militant and unyielding position on the issue of anti-imperialist struggle, but the Soviets were unwilling, for example, to give their support on the Taiwan question. In addition, the two communist powers disagreed on doctrinal matters. The Chinese accused the Soviets of "revisionism"; the latter countered with charges of "dogmatism." Rivalry within the international communist movement also exacerbated Sino-Soviet relations. An additional complication was the history of suspicion each side had toward the other, especially the Chinese, who had lost a substantial part of their territory to Tsarist Russia in the mid-nineteenth century. Whatever the causes of the dispute, the Soviet suspension of aid was a blow to the Chinese scheme for developing industrial and high-level (including nuclear) technology.

The Sino-Soviet Split

Relations between the USSR and the PRC had begun to deteriorate in 1956 after Khrushchev revealed his “Secret Speech” at the 20th Communist Party Congress. The “Secret Speech” criticized many of Stalin’s policies, especially his purges of Party members, and it marked the beginning of Khrushchev’s de-Stalinization process. This created a serious domestic problem for Mao, who had supported many of Stalin’s policies and modeled many of his own after them. With Khrushchev’s denouncement of Stalin, many people questioned Mao’s decisions.
Moreover, the emergence of movements fighting for reforms of the existing communist systems across East-Central Europe after Khrushchev’s speech worried Mao. A brief political liberalization introduced to prevent similar movements in China, most notably the relaxed political censorship known as the Hundred Flowers Campaign, backfired against Mao, whose position within the Party only weakened. This convinced him further that de-Stalinization was a mistake. Mao took a sharp turn to the left ideologically, which contrasted with the ideological softening of de-Stalinization. With Khrushchev’s strengthening position as Soviet leader, the two countries were set on two different ideological paths. Mao’s implementation of the Great Leap Forward, which utilized communist policies closer to Stalin than to Khrushchev, included forming a personality cult around Mao, as well as instituting Stalinist economic policies. This angered the USSR, especially after Mao criticized Khrushchev’s economic policies through the plan while also calling for more Soviet aid. The Soviet leader saw the new policies as evidence of an increasingly confrontational and unpredictable China. At first, the Sino-Soviet split manifested indirectly as criticism towards each other’s client states. China denounced Yugoslavia and Tito, who pursued a non-aligned foreign policy, while the USSR denounced Enver Hoxha and the People’s Socialist Republic of Albania, which refused to abandon its pro-Stalin stance and sought its survival in alignment with China. The USSR also offered moral support to the Tibetan rebels in their 1959 Tibetan uprising against China. By 1960, the mutual criticism moved out in the open, when Khrushchev and Peng Zhen had an open argument at the Romanian Communist Party congress.
Khrushchev characterized Mao as “a nationalist, an adventurist, and a deviationist.” In turn, China’s Peng Zhen called Khrushchev a Marxist revisionist, criticizing him as “patriarchal, arbitrary and tyrannical.” Khrushchev denounced China with an 80-page letter to the conference and responded to Mao by withdrawing around 1,400 Soviet experts and technicians from China, leading to the cancellation of more than 200 scientific projects intended to foster cooperation between the two nations. After a series of unconvincing compromises and explicitly hostile gestures, in 1962, the PRC and the USSR finally broke relations. Mao criticized Khrushchev for backing down during the Cuban missile crisis (1962). Khrushchev replied angrily that Mao’s confrontational policies would lead to a nuclear war. In the wake of the Cuban missile crisis, nuclear disarmament was brought to the forefront of geopolitics. To curb the production of nuclear weapons in other nations, the Soviet Union, Britain, and the U.S. signed the Limited Test Ban Treaty in 1963. At the time, China was developing its own nuclear weaponry and Mao saw the treaty as an attempt to slow China’s advancement as a superpower. This was the final straw for Mao, who from September 1963 to July 1964 published nine letters openly criticizing every aspect of Khrushchev’s leadership. The Sino-Soviet alliance then completely collapsed, and Mao turned to other Asian, African, and Latin American countries to develop new and stronger alliances and further the PRC’s economic and ideological redevelopment.

Readjustment and Recovery, 1961-1965

Meanwhile in the early 1960s, Mao faced criticism within China as well. In 1961 the political tide in China began to swing to the right, as evidenced by the ascendancy of a more moderate leadership.
In an effort to stabilize the economic front, for example, the party—still under Mao's titular leadership but under the dominant influence of Liu Shaoqi, Deng Xiaoping, Chen Yun, Peng Zhen, Bo Yibo, and others—initiated a series of corrective measures. Among these measures was the reorganization of the commune system, with the result that production brigades and teams had more say in their own administrative and economic planning. To gain more effective control from the center, the CCP reestablished its six regional bureaus and initiated steps aimed at tightening party discipline and encouraging the leading party cadres to develop populist-style leadership at all levels. The efforts were prompted by the party's realization that the arrogance of party and government functionaries had engendered only public apathy. On the industrial front, much emphasis was then placed on realistic and efficient planning; ideological fervor and mass movements were no longer the controlling themes of industrial management. Production authority was restored to factory managers. Another notable emphasis after 1961 was the party's greater interest in strengthening the defense and internal security establishment. By early 1965 the country was well on its way to recovery under the direction of the party apparatus or, to be more specific, the Central Committee's Secretariat headed by Secretary General Deng Xiaoping. 
The Cultural Revolution

Key Terms / Key Concepts

Cultural Revolution: a sociopolitical movement in China from 1966 until 1976; set into motion by Mao Zedong, then Chairman of the Communist Party of China

Down to the Countryside Movement: a policy instituted by Mao Zedong in the People’s Republic of China in the late 1960s and early 1970s, instigated by what was perceived as anti-bourgeois thinking prevalent during the Cultural Revolution and resulting in certain privileged urban youth being sent to farming villages to work

Gang of Four: a political faction composed of four Chinese Communist Party officials that came to prominence during the Cultural Revolution (1966 – 76) and was later charged with a series of treasonous crimes

Red Guards: a fanatic student mass paramilitary social movement mobilized by Mao Zedong in 1966 and 1967 during the Cultural Revolution

struggle sessions: a form of public humiliation and torture used by the Communist Party of China in the Mao Zedong era, particularly during the Cultural Revolution, to shape public opinion and humiliate, persecute, or execute political rivals and class enemies

Origins of the Cultural Revolution

In the early 1960s, Mao was on the political sidelines and in semi-seclusion. By 1962, however, he began an offensive to purify the party, having grown increasingly uneasy about what he believed were the creeping "capitalist" and antisocialist tendencies in the country. As a hardened veteran revolutionary who had overcome the severest adversities, Mao continued to believe that the material incentives that had been restored to the peasants and others were corrupting the masses and were counterrevolutionary. To stop the so-called capitalist trend, Mao launched the Socialist Education Movement, for which the primary emphasis was on restoring ideological purity, reinfusing revolutionary fervor into the party and government bureaucracies, and intensifying class struggle.
There were internal disagreements, however, not on the aim of the movement but on the methods of carrying it out. Opposition came mainly from the moderates represented by Liu Shaoqi and Deng Xiaoping, who were unsympathetic to Mao's policies. The Socialist Education Movement was soon paired with another Mao campaign, the theme of which was "to learn from the People's Liberation Army." Minister of National Defense Lin Biao's rise to the center of power was increasingly conspicuous. It was accompanied by his call on the PLA and the CCP to accentuate Maoist thought as the guiding principle for the Socialist Education Movement and for all revolutionary undertakings in China. In connection with the Socialist Education Movement, a thorough reform of the school system, which had been planned earlier to coincide with the Great Leap Forward, went into effect. The reform was intended as a work-study program, a new xiafang movement—in which schooling was slated to accommodate the work schedule of communes and factories. It had the dual purpose of providing mass education less expensively than previously offered and of re-educating intellectuals and scholars to accept the need for their own participation in manual labor. The drafting of intellectuals for manual labor was part of the party's rectification campaign, publicized through the mass media as an effort to remove "bourgeois" influences from professional workers—particularly, their tendency to have greater regard for their own specialized fields than for the goals of the party. Official propaganda accused them of being more concerned with having "expertise" than being "red".

The Militant Phase, 1966-1968

By mid-1965 Mao had gradually but systematically regained control of the party with the support of Lin Biao, Jiang Qing (Mao's fourth wife), and Chen Boda, a leading theoretician. In late 1965 a leading member of Mao's "Shanghai Mafia," Yao Wenyuan, wrote a thinly veiled attack on the deputy mayor of Beijing, Wu Han.
In the next six months, under the guise of upholding ideological purity, Mao and his supporters purged or attacked a wide variety of public figures, including State Chairman Liu Shaoqi and other party and state leaders. By mid-1966 Mao's campaign had erupted into what came to be known as the Great Proletarian Cultural Revolution—the first mass action to have emerged against the CCP apparatus itself. Considerable intraparty opposition to the Cultural Revolution was evident. On the one side was the Mao-Lin Biao group, supported by the PLA; on the other side was a faction led by Liu Shaoqi and Deng Xiaoping, which had its strength in the regular party machine. Premier Zhou Enlai, while remaining personally loyal to Mao, tried to mediate or to reconcile the two factions. Mao felt that he could no longer depend on the formal party organization, convinced that it had been permeated with the "capitalist" and bourgeois obstructionists. He turned to Lin Biao and the PLA to counteract the influence of those who were allegedly “‘left’ in form but ‘right’ in essence.” The PLA was widely extolled as a “great school” for the training of a new generation of revolutionary fighters and leaders. Maoists also turned to high school students for political demonstrations on their behalf. These students, joined by some university students, came to be known as the Red Guards. Millions of Red Guards were encouraged by the Cultural Revolution group to become a “shock force” and to “bombard” with criticism both the regular party headquarters in Beijing and those at the regional and provincial levels. Red Guard activities were promoted as a reflection of Mao's policy of rekindling revolutionary enthusiasm and destroying "outdated," "counterrevolutionary" symbols and values. Mao's ideas, popularized in the Quotations from Chairman Mao, became the standard by which all revolutionary efforts were to be judged. 
The "four big rights"—speaking out freely, airing views fully, holding great debates, and writing big-character posters—became an important factor in encouraging Mao's youthful followers to criticize his intraparty rivals. The "four big rights" became such a major feature during the period that they were later institutionalized in the state constitution of 1975. The result of the unfettered criticism of established organs of control by China's exuberant youth was massive civil disorder, punctuated also by clashes among rival Red Guard gangs and between the gangs and local security authorities. The party organization was shattered from top to bottom. (The Central Committee's Secretariat ceased functioning in late 1966.) The resources of the public security organs were severely strained. Faced with imminent anarchy, the PLA—the only organization whose ranks for the most part had not been radicalized by Red Guard-style activities—emerged as the principal guarantor of law and order and the de facto political authority. And although the PLA was under Mao's rallying call to "support the left," PLA regional military commanders ordered their forces to restrain the leftist radicals, thus restoring order throughout much of China. The PLA also was responsible for the appearance in early 1967 of the revolutionary committees—a new form of local control that replaced local party committees and administrative bodies. The revolutionary committees were staffed with Cultural Revolution activists, trusted cadres, and military commanders, the latter frequently holding the greatest power. The radical tide receded somewhat beginning in late 1967, but it was not until after mid-1968 that Mao came to realize the uselessness of further revolutionary violence. Liu Shaoqi, Deng Xiaoping, and their fellow “revisionists” and “capitalist roaders” had been purged from public life by early 1967, and the Maoist group had since been in full command of the political scene. 
Viewed in larger perspective, the need for domestic calm and stability was occasioned perhaps even more by pressures emanating from outside China. The Chinese were alarmed in 1966 – 68 by steady Soviet military buildups along their common border. The Soviet invasion of Czechoslovakia in 1968 heightened Chinese apprehensions. In March 1969 Chinese and Soviet troops clashed on Zhenbao Island (known to the Soviets as Damanskiy Island) in the disputed Wusuli Jiang (Ussuri River) border area. The tension on the border had a sobering effect on the fractious Chinese political scene and provided the regime with a new and unifying rallying call.

The Ninth National Party Congress to the Demise of Lin Biao, 1969-1971

The activist phase of the Cultural Revolution—considered to be the first in a series of cultural revolutions—was brought to an end in April 1969. This end was formally signaled at the CCP's Ninth National Party Congress, which convened under the dominance of the Maoist group. Mao was confirmed as the supreme leader. Lin Biao was promoted to the post of CCP vice chairman and was named as Mao's successor. Others who had risen to power by means of Cultural Revolution machinations were rewarded with positions on the Political Bureau; a significant number of military commanders were appointed to the Central Committee. The party congress also marked the rising influence of two opposing forces, Mao's wife, Jiang Qing, and Premier Zhou Enlai. The general emphasis after 1969 was on reconstruction through rebuilding of the party, economic stabilization, and greater sensitivity to foreign affairs. Pragmatism gained momentum as a central theme of the years following the Ninth National Party Congress, but this tendency was paralleled by efforts of the radical group to reassert itself. The radical group—Kang Sheng, Xie Fuzhi, Jiang Qing, Zhang Chunqiao, Yao Wenyuan, and Wang Hongwen—no longer had Mao's unqualified support.
By 1970 Mao viewed his role more as that of the supreme elder statesman than of an activist in the policy-making process. This was probably the result as much of his declining health as of his view that a stabilizing influence should be brought to bear on a divided nation. As Mao saw it, China needed both pragmatism and revolutionary enthusiasm, each acting as a check on the other. Factional infighting would continue unabated through the mid-1970s, although an uneasy coexistence was maintained while Mao was alive. The rebuilding of the CCP got under way in 1969. The process was difficult, however, given the pervasiveness of factional tensions and the discord carried over from the Cultural Revolution years. Differences persisted among the military, the party, and left-dominated mass organizations over a wide range of policy issues, to say nothing of the radical-moderate rivalry. It was not until December 1970 that a party committee could be reestablished at the provincial level. In political reconstruction two developments were noteworthy. As the only institution of power for the most part left unscathed by the Cultural Revolution, the PLA was particularly important in the politics of transition and reconstruction. The PLA was, however, not a homogeneous body. In 1970 – 71 Zhou Enlai was able to forge a centrist-rightist alliance with a group of PLA regional military commanders who had taken exception to certain of Lin Biao's policies. This coalition paved the way for a more moderate party and government leadership in the late 1970s and 1980s. The PLA was divided largely on policy issues. On one side of the infighting was the Lin Biao faction, which continued to exhort the need for “politics in command” and for an unremitting struggle against both the Soviet Union and the United States. 
On the other side was a majority of the regional military commanders, who had become concerned about the effect Lin Biao's political ambitions would have on military modernization and economic development. These commanders' views generally were in tune with the positions taken by Zhou Enlai and his moderate associates. Specifically, the moderate groups within the civilian bureaucracy and the armed forces spoke for more material incentives for the peasantry, efficient economic planning, and a thorough reassessment of the Cultural Revolution. They also advocated improved relations with the West in general and the United States in particular—if for no other reason than to counter the perceived expansionist aims of the Soviet Union. Generally, the radicals' objection notwithstanding, the Chinese political tide shifted steadily toward the right of center. Among the notable achievements of the early 1970s was China's decision to seek reconciliation with the United States, as dramatized by President Richard M. Nixon's visit in February 1972. In September 1972 diplomatic relations were established with Japan. Without question, the turning point in the decade of the Cultural Revolution was Lin Biao's abortive coup attempt and his subsequent death in a plane crash as he fled China in September 1971. The immediate consequence was a steady erosion of the fundamentalist influence of the left-wing radicals. Lin Biao's closest supporters were purged systematically. Efforts to depoliticize and promote professionalism were intensified within the PLA. These were also accompanied by the rehabilitation of those persons who had been persecuted or fallen into disgrace in 1966 – 68.

End of the Era of Mao Zedong, 1972-1976

Among the most prominent of those rehabilitated was Deng Xiaoping, who was reinstated as a vice premier in April 1973, ostensibly under the aegis of Premier Zhou Enlai but certainly with the concurrence of Mao Zedong.
Together, Zhou Enlai and Deng Xiaoping came to exert strong influence. Their moderate line favoring modernization of all sectors of the economy was formally confirmed at the Tenth National Party Congress in August 1973, at which time Deng Xiaoping was made a member of the party's Central Committee (but not yet of the Political Bureau). The radical camp fought back by building an armed urban militia, but its mass base of support was limited to Shanghai and parts of northeastern China—hardly sufficient to arrest what it denounced as “revisionist” and “capitalist” tendencies. In January 1975 Zhou Enlai, speaking before the Fourth National People's Congress, outlined a program of what has come to be known as the Four Modernizations for the four sectors of agriculture, industry, national defense, and science and technology. This program would be reaffirmed at the Eleventh National Party Congress, which convened in August 1977. Also in January 1975, Deng Xiaoping's position was solidified by his election as a vice chairman of the CCP and as a member of the Political Bureau and its Standing Committee. Deng also was installed as China's first civilian chief of the PLA General Staff Department. The year 1976 saw the deaths of the three most senior officials in the CCP and the state apparatus: Zhou Enlai in January, Zhu De (then chairman of the Standing Committee of the National People's Congress and acting head of state) in July, and Mao Zedong in September. In April of the same year, masses of demonstrators in Tiananmen Square in Beijing memorialized Zhou Enlai and criticized Mao's closest associates, Zhou's opponents. In June the government announced that Mao would no longer receive foreign visitors. In July an earthquake devastated the city of Tangshan in Hebei Province. These events, along with the deaths of the three Communist leaders, contributed to a popular sense that the “mandate of heaven” had been withdrawn from the ruling party.
At best the nation was in a state of serious political uncertainty. Deng Xiaoping, the logical successor as premier, received a temporary setback after Zhou's death, when radicals launched a major counterassault against him. In April 1976 Deng was once more removed from all his public posts, and a relative political unknown, Hua Guofeng—a Political Bureau member, vice premier, and minister of public security—was named acting premier and party first vice chairman. Even though Mao Zedong's role in political life had been sporadic and shallow in his later years, it was crucial. Despite Mao's alleged lack of mental acuity, his influence in the months before his death remained such that his orders to dismiss Deng and appoint Hua Guofeng were accepted immediately by the Political Bureau. The political system had polarized in the years before Mao's death into increasingly bitter and irreconcilable factions. While Mao was alive—and playing these factions off against each other—the contending forces were held in check. His death resolved only some of the problems inherent in the succession struggle. The radical clique most closely associated with Mao and the Cultural Revolution became vulnerable after Mao died, as Deng had been after Zhou Enlai's demise. In October, less than a month after Mao's death, Jiang Qing and her three principal associates—denounced as the Gang of Four—were arrested with the assistance of two senior Political Bureau members, Minister of National Defense Ye Jianying (1897 – 1986) and Wang Dongxing, commander of the CCP's elite bodyguard. Within days it was formally announced that Hua Guofeng had assumed the positions of party chairman, chairman of the party's Central Military Commission, and premier. 
The Post-Mao Period, 1976-1978

The jubilation following the incarceration of the Gang of Four and the popularity of the new ruling triumvirate (Hua Guofeng, Ye Jianying, and Li Xiannian) were succeeded by calls for the restoration to power of Deng Xiaoping and the elimination of leftist influence throughout the political system. By July 1977, at no small risk to undercutting Hua Guofeng's legitimacy as Mao's successor and seeming to contradict Mao's apparent will, the Central Committee exonerated Deng Xiaoping. Deng admitted some shortcomings in the events of 1975, and finally, at a party Central Committee session, he resumed all the posts from which he had been removed in 1976. The post-Mao political order was given its first vote of confidence at the Eleventh National Party Congress, held August 12 – 18, 1977. Hua was confirmed as party chairman, and Ye Jianying, Deng Xiaoping, Li Xiannian, and Wang Dongxing were elected vice chairmen. The congress proclaimed the formal end of the Cultural Revolution, blamed it entirely on the Gang of Four, and reiterated that “the fundamental task of the party in the new historical period is to build China into a modern, powerful socialist country by the end of the twentieth century.” Many contradictions still were apparent regarding the Maoist legacy and the possibility of future cultural revolutions. However, the stage was set for China to move in a new direction under the leadership of Deng Xiaoping.

Consequences of the Cultural Revolution

The Cultural Revolution was a sociopolitical movement, set into motion by Mao, that started in 1966 and ended in 1976 and whose stated goal was to preserve “true” Communist ideology in China by purging remnants of capitalist and traditional elements from Chinese society and reimposing Maoism as the dominant ideology within the Party. The Revolution marked the return of Mao to a position of power after the Great Leap Forward.
The Revolution was launched after Mao alleged that bourgeois elements had infiltrated the government and society at large, aiming to restore capitalism. He insisted that these “revisionists” be removed through violent class struggle. China’s youth responded to Mao’s appeal by forming Red Guard groups around the country. The movement spread into the military, urban workers, and the Communist Party leadership itself. It resulted in widespread factional struggles in all walks of life. In the top leadership, it led to a mass purge of senior officials, most notably Liu Shaoqi and Deng Xiaoping. During the same period, Mao’s personality cult grew to immense proportions. Millions of people were persecuted in the violent struggles that ensued across the country and suffered a wide range of abuses, including public humiliation, arbitrary imprisonment, torture, sustained harassment, and seizure of property. A large segment of the population was forcibly displaced, most notably the transfer of urban youth to rural regions during the Down to the Countryside Movement. Mao set the scene for the Cultural Revolution by “cleansing” Beijing of powerful officials of questionable loyalty. His approach was less than transparent. He achieved this purge through newspaper articles, internal meetings, and skillfully employing his network of political allies. The start of the Cultural Revolution brought huge numbers of Red Guards to Beijing, with all expenses paid by the government. The revolution aimed to destroy the “Four Olds” (old customs, old culture, old habits, and old ideas) and establish the corresponding “Four News,” which ranged from the changing of names and haircuts to ransacking homes, vandalizing cultural treasures, and desecrating temples. In a few years, countless ancient buildings, artifacts, antiques, books, and paintings were destroyed by the members of the Red Guards. 
Believing that certain liberal bourgeois elements of society continued to threaten the socialist framework, the Red Guards struggled against authorities at all levels of society and even set up their own tribunals. Chaos reigned in much of the nation. During the Cultural Revolution, nearly all of the schools and universities in China were closed and the young intellectuals living in cities were ordered to the countryside to be “re-educated” by the peasants, where they performed hard manual labor and other work. The Cultural Revolution led to the destruction of much of China’s traditional cultural heritage and the imprisonment of a huge number of citizens, as well as general economic and social chaos. Millions of lives were ruined during this period as the Cultural Revolution pierced every part of Chinese life. It is estimated that hundreds of thousands, perhaps millions, perished in the violence of the Cultural Revolution. The Revolution aimed to get rid of those who allegedly promoted bourgeois ideas, as well as those who were seen as coming from an exploitative family background or belonged to one of the Five Black Categories (landlords, rich farmers, counter-revolutionaries, bad-influencers or “bad elements,” and rightists). Many people perceived to belong to any of these categories, regardless of guilt or innocence, were publicly denounced, humiliated, and beaten in so-called "struggle sessions". In their revolutionary fervor, students denounced their teachers and children denounced their parents. During the Cultural Revolution, libraries full of historical and foreign texts were destroyed and books were burned. Temples, churches, mosques, monasteries, and cemeteries were closed down and sometimes converted to other uses, looted, and destroyed. Among the countless acts of destruction, Red Guards from Beijing Normal University desecrated and badly damaged the burial place of Confucius. 
Although the effects of the Cultural Revolution were disastrous for millions of people in China, there were some positive outcomes, particularly in the rural areas. The upheavals of the Cultural Revolution and the hostility towards the intellectual elite are widely accepted to have damaged the quality of education in China, especially the higher education system. However, some policies also provided many in the rural communities with middle school education for the first time, which facilitated rural economic development in the 1970s and 80s. Similarly, a large number of health personnel was deployed to the countryside. Some farmers were given informal medical training and healthcare centers were established in rural communities. This led to a marked improvement in the health and the life expectancy of the general population. The Cultural Revolution also brought to the forefront numerous internal power struggles within the Party, many of which had little to do with the larger battles between Party leaders but resulted instead from local factionalism and petty rivalries that were usually unrelated to the Revolution itself. Because of the chaotic political environment, local governments lacked organization and stability, if they existed at all. Members of different factions often fought on the streets, and political assassinations, particularly in predominantly rural provinces, were common. The masses spontaneously involved themselves in factions and took part in open warfare against other factions. The ideology that drove these factions was vague and sometimes non-existent, with the struggle for local authority being the only motivation for mass involvement. The Cultural Revolution wreaked havoc on minority cultures in China. In Inner Mongolia, some 790,000 people were persecuted. In Xinjiang, copies of the Qur’an and other books of the Uyghur people were burned. Muslim imams were reportedly paraded around with paint splashed on their bodies.
In the ethnic Korean areas of northeast China, language schools were destroyed. In Yunnan Province, the palace of the Dai people’s king was torched. The massacre of Muslim Hui people at the hands of the People’s Liberation Army in Yunnan, known as the Shadian Incident, reportedly claimed over 1,600 lives in 1975.

Impact of the Cultural Revolution on Sino-Soviet Relations

The Sino-Soviet split, seen by historians as one of the key events of the Cold War, had massive consequences for the two powers and for the world. The USSR had a network of communist parties it supported. China created its own rival network to battle it out for local control of the left in numerous countries. The divide fractured the international communist movement at the time and opened the way for the warming of relations between the U.S. and China under Richard Nixon and Mao in 1971. In China, Mao launched the Cultural Revolution (1966 – 76), largely to prevent the development of Russian-style bureaucratic communism of the USSR. The ideological split also escalated to small-scale warfare between Russia and China, with a revived conflict over the Russo-Chinese border demarcated in the 19th century (starting in 1966) and Red Guards attacking the Soviet embassy in Beijing (1967). In the 1970s, Sino-Soviet ideological rivalry extended to Africa and the Middle East, where the Soviet Union and China funded and supported opposed political parties, militias, and states. After the regime of Mao Zedong, the PRC–USSR ideological schism no longer shaped domestic politics but continued to impact geopolitics. The initial Soviet-Chinese proxy war occurred in Indochina in 1975, where the Communist victory of the National Liberation Front (Viet Cong) and of North Vietnam in the 30-year Vietnam War had produced a post–colonial Indochina that featured pro-Soviet regimes in Vietnam (Socialist Republic of Vietnam) and Laos (Lao People’s Democratic Republic), and a pro-Chinese regime in Cambodia (Democratic Kampuchea).
At first, Vietnam ignored the Khmer Rouge domestic reorganization of Cambodia by the Pol Pot regime (1975 – 79) as an internal matter, until the Khmer Rouge attacked the ethnic Vietnamese populace of Cambodia and the border with Vietnam. The counterattack precipitated the Cambodian-Vietnamese War (1975 – 79) that deposed Pol Pot in 1978. In response, the PRC denounced the Vietnamese and retaliated by invading northern Vietnam in the Sino-Vietnamese War (1979). In turn, the USSR denounced the PRC’s invasion of Vietnam. In 1979, the USSR invaded the Democratic Republic of Afghanistan to sustain the Afghan Communist government. The PRC viewed the Soviet invasion as a local ploy within the Soviet Union's greater geopolitical encirclement of China. In response, the PRC entered a tripartite alliance with the U.S. and Pakistan to sponsor Islamist Afghan armed resistance to the Soviet occupation (1979 – 89). Relations between China and the Soviet Union remained tense until the visit of Soviet leader Mikhail Gorbachev to Beijing in 1989.

Primary Source: Editorial of the Liberation Army Daily (Jiefangjun Bao)

Editorial of the Liberation Army Daily (Jiefangjun Bao): Mao Tse-Tung's (Mao Zedong) Thought is the Telescope and Microscope of Our Revolutionary Cause

June 7, 1966

The current great socialist cultural revolution is a great revolution to sweep away all monsters and a great revolution that remoulds the ideology of people and touches their souls. What weapon should be used to sweep away all monsters? What ideology should be applied to arm people's minds and remould their souls?
The most powerful ideological weapon, the only one, is the great Mao Tse-tung's thought. Mao Tse-tung's (Mao Zedong) thought is our political orientation, the highest instruction for our actions; it is our ideological and political telescope and microscope for observing and analysing all things. In this unprecedented great cultural revolution, we should use Mao Tse-tung's thought to observe, analyse and transform everything, and, in a word, put it in command of everything. We should use Mao Tse-tung's thought to storm the enemy's positions and seize victory. . . . Our struggle against the black anti-Party, anti-socialist line and gangsters is a mighty, life-and-death class struggle. The enemies without guns are more hidden, cunning, sinister and vicious than the enemies with guns. The representatives of the bourgeoisie and all monsters, including the modern revisionists, often oppose the red flag by hoisting a red flag and oppose Marxism-Leninism and Mao Tse-tung's thought under the cloak of Marxism-Leninism and Mao Tse-tung's thought when they attack the Party and socialism, because Marxism-Leninism and Mao Tse-tung's thought are becoming more popular day by day, our Party and Chairman Mao enjoy an incomparably high prestige and the dictatorship of the proletariat in our country is becoming more consolidated. These are the tactics that the revisionists always use in opposing Marxism-Leninism. This is a new characteristic of the class struggle under the conditions of the dictatorship of the proletariat. The many facts exposed during the great cultural revolution show us more clearly that the anti-Party and anti-socialist elements are all careerists, schemers and hypocrites of the exploiting classes. They indulge in double-dealing. They feign compliance while acting in opposition. They appear to be men but are demons at heart. They speak human language to your face, but talk devil's language behind your back.
They are wolves in sheep's clothing and man-eating tigers with smiling faces. They often use the phrases of Marxism-Leninism and Mao Tse-tung's thought as a cover while greatly publicizing diametrically opposed views behind the word "but" and smuggling in bourgeois and revisionist stuff. Enemies holding a false red banner are ten times more vicious than enemies holding a white banner. Wolves in sheep's clothing are ten times more sinister than ordinary wolves. Tigers with smiling faces are ten times more ferocious than tigers with their fangs bared and their claws sticking out. Sugar-coated bullets are ten times more destructive than real bullets. A fortress is most vulnerable when attacked from within. Enemies who have wormed their way into our ranks are far more dangerous than enemies operating in the open. We must give this serious attention and be highly vigilant. In such a very complicated and acute class struggle, how are we to draw a clear-cut line between the enemy and ourselves and maintain a firm stand? How are we to distinguish between revolutionaries and counter-revolutionaries, genuine revolutionaries and sham revolutionaries, and Marxism-Leninism and revisionism? We must master Mao Tse-tung's thought, the powerful ideological weapon, and use it as a telescope and a microscope to observe all matters. With the invincible Mao Tse-tung's thought, with the scientific world outlook and methodology of dialectical materialism and historical materialism which have been developed by Chairman Mao, and with the sharp weapon of Chairman Mao's theory of classes and class struggle, we have the highest criterion for judging right and wrong. . . . Chairman Mao teaches us, "The proletariat seeks to transform the world according to its own world outlook, so does the bourgeoisie." In the sharp clash between the two world outlooks, either you crush me, or I crush you. It will not do to sit on the fence; there is no middle road.
The overthrown bourgeoisie, in their plots for restoration and subversion, always give first place to ideology, take hold of ideology and the superstructure. The representatives of the bourgeoisie, by using their position and power, usurped and controlled the leadership of a number of departments, did all they could to spread bourgeois and revisionist poison through the media of literature, the theatre, films, music, the arts, the press, periodicals, the radio, publications and academic research and schools, etc., in an attempt to corrupt people's minds and perpetrate "peaceful evolution" as ideological preparation and preparation of public opinion for capitalist restoration. If our proletarian ideology does not take over the position, then the bourgeois ideology will have free rein; it will gradually nibble away and chew you up bit by bit. Once proletarian ideology gives way, so will the superstructure and the economic base and this means the restoration of capitalism. Therefore, we must arm our minds with Mao Tse-tung's thought and establish a firm proletarian world outlook. We must use the great Mao Tse-tung's thought to fight and completely destroy the bourgeois ideological and cultural positions. Mao Tse-tung's thought is the acme of Marxism-Leninism in the present era. It is living Marxism-Leninism at its highest. It is the powerful, invincible weapon of the Chinese people, and it is also a powerful, invincible weapon of the revolutionary people the world over. Mao Tse-tung's thought has proved to be the invincible truth through the practice of China's democratic revolution, socialist revolution and socialist construction, and through the struggle in the international sphere against U.S. imperialism and its lackeys and against Khrushchev revisionism. Chairman Mao has, with the gifts of genius, creatively and comprehensively developed Marxism-Leninism.
Basing himself on the fundamental theses of Marxism-Leninism, Chairman Mao has summed up the experience of the practice of the Chinese revolution and the world revolution, and the painful lesson of the usurpation of the leadership of the Party and the state of the Soviet Union by the modern revisionist clique, systematically put forward the theory concerning classes, class contradictions and class struggle that exist in socialist society, greatly enriched and developed the Marxist-Leninist theory on the dictatorship of the proletariat, and put forward a series of wise policies aimed at opposing and preventing revisionism and the restoration of capitalism. . . . Every sentence by Chairman Mao is the truth, and carries more weight than ten thousand ordinary sentences. As the Chinese people master Mao Tse-tung's thought, China will be prosperous and ever-victorious. Once the world's people master Mao Tse-tung's thought, which is living Marxism-Leninism, they are sure to win their emancipation, bury imperialism, modern revisionism and all reactionaries lock, stock and barrel, and realize communism throughout the world step by step. The most fundamental task in the great socialist cultural revolution in our country is to eliminate thoroughly the old ideology and culture, the old customs and habits which were fostered by all the exploiting classes for thousands of years to poison the minds of the people, and to create and form an entirely new, proletarian ideology and culture, new customs and habits among the masses of the people. This is to creatively study and apply Mao Tse-tung's thought in tempestuous class struggle, popularize it and let it become closely integrated with the masses of workers, peasants and soldiers. Once the masses grasp it, Mao Tse-tung's thought will be transformed into a mighty material force. Facts show that those armed with Mao Tse-tung's thought are the bravest, wisest, most united, most steadfast in class stand and have the sharpest sight.
In this great, stormy cultural revolution, the masses of workers, peasants and soldiers are playing the role of the main force; this is the result of their efforts in creatively studying and applying Mao Tse-tung's thought and arming their ideology with it. This is another eloquent proof of the fact that when the masses of workers, peasants and soldiers master the political telescope and microscope of Mao Tse-tung's thought, they are invincible and ever-triumphant. . . .

The attitude towards Mao Tse-tung's thought, whether to accept it or resist it, to support it or oppose it, to love it warmly or be hostile to it, this is the touchstone to test and the watershed between true revolution and sham revolution, between revolution and counter-revolution, between Marxism-Leninism and revisionism. He who wants to make revolution must accept Mao Tse-tung's thought and act in accordance with it. A counter-revolutionary will inevitably disparage, distort, resist, attack and oppose Mao Tse-tung's thought. The "authorities" of the bourgeoisie and all monsters, including the modern revisionists, use every means to slander Mao Tse-tung's thought, and they are extremely hostile to the creative study and application of Mao Tse-tung's works by the masses of workers, peasants and soldiers. They wildly attack the creative study and application of Mao Tse-tung's works by workers, peasants and soldiers as "philistinism," "over-simplification" and "pragmatism." The only explanation is that this flows from their exploiting class instinct. They fear Mao Tse-tung's thought, the revolutionary truth of the proletariat, and particularly the integration of Mao Tse-tung's thought with the worker, peasant and soldier masses. Once the workers, peasants and soldiers master the sharp weapon of Mao Tse-tung's thought, all monsters have no ground left to stand on.
All their intrigues and plots will be thoroughly exposed, their ugly features will be brought into the broad light of day and their dream to restore capitalism will be utterly shattered.

The class enemy won't fall down if you don't hit him. He still tries to rise to his feet after he has fallen. When one black line is eliminated, another appears. When one gang of representatives of the bourgeoisie has been laid low, a new one takes the stage. We must follow the instructions of the Central Committee of the Communist Party of China and never forget the class struggle, never forget the dictatorship of the proletariat, never forget to give prominence to politics, never forget to hold aloft the great red banner of Mao Tse-tung's thought. We must firmly give prominence to politics. We must creatively study and apply still better Chairman Mao Tse-tung's works, putting stress on the importance of application. We must consider Chairman Mao's works the supreme directive for all our work. We must master Mao Tse-tung's thought and pass it on from generation to generation. This is dictated by the needs of the revolution, the situation, the struggle against the enemy, the preparations to smash aggressive war by U.S. imperialism, of opposing and preventing revisionism, preventing the restoration of capitalism, of building socialism with greater, faster, better and more economical results and of ensuring the gradual transition from socialism to communism in China. Chairman Mao is the radiant sun lighting our minds. Mao Tse-tung's thought is our lifeline. Those who oppose Mao Tse-tung's thought, no matter when they do so and what kind of "authorities" they are, will be denounced by the entire Party and the whole nation.

Source: from The Great Socialist Cultural Revolution in China (Peking: Foreign Languages Press, 1966), III, 11-17. This text is part of the Internet Modern History Sourcebook.
The Sourcebook is a collection of public domain and copy-permitted texts for introductory level classes in modern European and World history. Unless otherwise indicated the specific electronic form of the document is copyright. Permission is granted for electronic copying, distribution in print form for educational purposes and personal use. If you do reduplicate the document, indicate the source. No permission is granted for commercial use of the Sourcebook. © Paul Halsall, July 1998

Attributions

Title Image: https://commons.wikimedia.org/wiki/File:Mao_Zedong_in_1959_(cropped).jpg
Mao Zedong circa 1963 - неизвестный (unknown), Public domain, via Wikimedia Commons

Adapted from: https://courses.lumenlearning.com/boundless-worldhistory/chapter/communist-china/
https://creativecommons.org/licenses/by-sa/4.0/

Public Domain, Library of Congress Publication
Source: "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 15: Cold War & Decolonization, Chinese Revolution" by Anna McCollum, via OER Commons (https://oercommons.org/courseware/lesson/88080/overview), licensed under Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/).
The United States, 1939-1941: Neutrality?

Overview

The Arsenal of the Allies: The United States

In December 1940, Franklin Roosevelt announced in one of his fireside chats that the United States would be the "arsenal of democracy." In this speech, he urged Americans to support the democratic Allies in their fight against the Nazis: fascist oppressors who stood in direct opposition to democracy. Moreover, Roosevelt announced that the United States would provide goods and products essential to the Allies' war effort. By and large, the neutral United States rallied behind Roosevelt's words. While most Americans were not in favor of getting entangled in another European war, the majority agreed that supplying the British with military products was essential. As the war in Europe increased in its scope and violence, so too did the industrial output on the American homefront. When the United States was drawn into World War II on the side of the Allies in 1941, every facet of society became devoted to helping the war effort. Indeed, the United States had become the world's "arsenal of democracy."

Learning Objectives

- Understand the significance of American industrial production during the World War II years.
- Identify and explain the significance of the Lend-Lease Act.

Key Terms / Key Concepts

Bonds: loans made by investors to the government; the primary way of financing World War II in the United States

Cash and Carry: 1939 American policy that allowed Allied countries to come to the United States and purchase military equipment with cash

Lend-Lease Act: 1941 American program that agreed to "lend, lease, or otherwise dispose of" military and food aid to Allied nations

Liberty Ships: mass-produced merchant cargo ships built rapidly in American shipyards to carry war materiel during World War II

War Production Board: American agency that governed war production during World War II

The Role of the Neutral United States

From the outset of World War II, Franklin Roosevelt was a staunch Anglophile.
He admired many things about England and had developed a close relationship with the young and inexperienced king, George VI. Warm and engaging, Roosevelt was also paternal, and some historians describe his relationship with King George VI as almost that of father to son. Likewise, Roosevelt developed a close relationship with England's future prime minister, Winston Churchill. When war erupted in Europe in the autumn of 1939, Roosevelt desperately wished to help the British. A master politician, he understood that the American public remembered all too well the horrors of World War I. The people were overwhelmingly against becoming involved in another European war. For this reason, Roosevelt would have to be crafty in how he helped the Allies.

Cash and Carry

Following Germany's invasion of Poland in 1939, Roosevelt signed the Fourth Neutrality Act. This gave the United States the ability to trade arms with foreign nations, provided that the countries came to America to retrieve the arms and paid for them in cash. This policy was quickly dubbed Cash and Carry. From Roosevelt's perspective, the act served two immediate purposes: it galvanized American production and businesses, and it allowed the British to purchase military equipment from the United States to bolster their defenses and war effort.

Lend-Lease

Following the fall of France and the Battle of Britain, Roosevelt was committed to helping the Allies even more. In March 1941, Roosevelt signed the Lend-Lease Act. This allowed the President "to lend, lease, sell, or barter arms, ammunition, food, or any 'defense article' or any 'defense information' to 'the government of any country whose defense the President deems vital to the defense of the United States.'" In practice, the Lend-Lease Act allowed the President to give military products and food to the Allies with little thought of their return or compensation. Through the Lend-Lease Act, the U.S.
sent military equipment, including airplanes and heavy artillery, to England, Free France, the Soviet Union, and other Allied nations; however, most products went to England. Because of the Lend-Lease Act, skirmishes erupted in the Atlantic between U.S. cruisers and German U-boats, because the Germans perceived the act as an unofficial alliance between the United States and England, as well as the Western Allies. In England, the act was hailed as helping save the British war effort. Planes, tanks, trucks, ammunition, helmets, and even food were sent to England. Similarly, the United States sent shipments of military equipment and food to the Soviet Union in the fall of 1941, following Germany's invasion. By all accounts, the Lend-Lease program helped the Allies win the war. As Roosevelt predicted, the program also helped galvanize American industries and businesses. However, the United States received little compensation for the delivery of the military and food shipments, and very little of the military equipment was returned after the war.

The United States Homefront during World War II

Once the United States formally entered World War II in December 1941, the U.S. government took strong measures to convert the economy to meet the demands of war. These demands turned out to be the most effective measure in battling the long-lasting consequences of the Great Depression. Government programs continued to recruit workers; however, this time the demand was fueled not by the economic crisis, but by massive war needs. Production sped up dramatically, closed factories reopened, and new ones were established, which created millions of jobs in both private and public sectors as industries adjusted to the nearly insatiable needs of the military. Famously, under the "miracle man" Henry J. Kaiser, Liberty Ships were produced at the rate of one every three days after the attack on Pearl Harbor.
Companies worked around the clock to produce war materials at a similar rate. By the end of 1943, two-thirds of the American economy had been integrated into the war effort.

War Production Board

The most powerful of all the wartime organizations tasked with controlling the economy was the War Production Board (WPB), established by President Roosevelt on January 16, 1942. Its purpose was to regulate the production of materials during World War II in the United States. The WPB converted and expanded peacetime industries to meet war needs, allocated scarce materials vital to war production, established priorities in the distribution of materials and services, and prohibited nonessential production. It rationed such commodities as gasoline, heating oil, metals, rubber, paper, and plastics. The WPB and the nation's factories effected a great turnaround. Military aircraft production, which totaled 6,000 in 1940, jumped to 85,000 in 1943. Factories that made silk ribbons now produced parachutes, automobile factories now built tanks, typewriter companies converted to making machine guns, undergarment manufacturers sewed mosquito netting, and a roller coaster manufacturer converted to the production of bomber repair platforms. The WPB ensured that each factory received the materials it needed to produce the most war goods in the shortest time. Between 1942 and 1945, the WPB supervised the production of $183 billion worth of weapons and supplies, about 40% of the world's output of munitions.

Rationing

The greatest challenge of such massive war-related production was the permanent scarcity of resources. In response, the U.S. government, like other states engaged in the war, introduced severe rationing measures. Tires were the first item to be rationed; there was a shortage of rubber for tires because the Japanese had quickly conquered the rubber-producing regions of Southeast Asia.
Throughout the war, rationing of gasoline was motivated by a desire to conserve rubber as much as by a desire to conserve gasoline. A national speed limit of 35 miles per hour was imposed to save fuel and rubber for tires. Automobile factories stopped manufacturing civilian models by early February 1942, when they converted to producing tanks, aircraft, weapons, and other military products, with the United States government as the only customer. As of March 1, 1942, dog food could no longer be sold in tin cans; therefore, manufacturers switched to dehydrated versions. As of April 1, 1942, anyone wishing to purchase a new toothpaste tube, then made from metal, had to turn in an empty one. By June 1942, companies also stopped manufacturing metal office furniture, radios, phonographs, refrigerators, vacuum cleaners, washing machines, and sewing machines for civilians. Sugar was the first consumer commodity rationed, with all sales ended on April 27, 1942. Coffee was rationed nationally on November 29, 1942. By the end of 1942, ration coupons were used for nine other items. Typewriters, gasoline, bicycles, footwear, silk, nylon, fuel oil, stoves, meat, lard, shortening and food oils, cheese, butter, margarine, processed foods (canned, bottled, and frozen), dried fruits, canned milk, firewood and coal, jams, jellies, and fruit butter were rationed by November 1943. Scarce medicines, such as penicillin, were rationed by triage officers in the U.S. military during World War II. Many American families helped reduce the demands put on farmers by planting victory gardens. These private kitchen gardens were planted at homes, but also in public spaces such as parks. They supplemented, rather than replaced, the fruits, vegetables, and herbs consumed by Americans. Moreover, they helped increase patriotism among families and communities.

Labor

The unemployment problem caused by the Great Depression ended with the mobilization for war, hitting an all-time low of 700,000 in fall 1944.
Greater wartime production created millions of new jobs, while the draft reduced the number of young men available for civilian jobs. There was a growing labor shortage in war centers, with sound trucks going street by street begging for people to apply for war jobs. So great was the demand for labor that millions of retired people, housewives, and students entered the labor force, lured by patriotism and wages. The shortage of grocery clerks caused retailers to convert from service at the counter to self-service. Before the war, most groceries, dry cleaners, drugstores, and department stores offered home delivery service, but the labor shortage, as well as gasoline and tire rationing, caused most retailers to stop delivery. They found that requiring customers to buy their products in person increased sales. Because of the unprecedented labor demands, groups that were historically excluded from the labor market, particularly African Americans and women, gained access to jobs. However, even these circumstances did not end discrimination, especially against workers of color.

Financing the War

As the U.S. entered World War II, Secretary of the Treasury Henry Morgenthau, Jr. began planning a national defense bond program to finance the war. Morgenthau advocated for a voluntary loan system and began planning the program in the fall of 1940. The intent was to unite the attractiveness of the baby bonds that had been implemented in the interwar period with the patriotic element of the Liberty Bonds of the First World War. Bonds became the main source of war financing, covering what economic historians estimate to be between 50% and 60% of war costs.

The Bond System

The War Finance Committee was placed in charge of supervising the sale of all bonds, and the War Advertising Council promoted voluntary compliance with bond buying. The government appealed to the public through popular culture.
Contemporary art was used to help promote the bonds, such as the Warner Brothers theatrical cartoon "Any Bonds Today?" Norman Rockwell's painting series, "The Four Freedoms," toured in a war bond effort that raised $132 million. Bond rallies were held throughout the country with celebrities, usually Hollywood film stars, to enhance the effectiveness of bond advertising. The Music Publishers Protective Association encouraged its members to include patriotic messages on the front of their sheet music, like "Buy U.S. Bonds and Stamps." Over the course of the war, 85 million Americans purchased bonds, totaling approximately $185.7 billion.

Global Impact

The United States in World War II was not only the "arsenal for democracy," but also the "breadbasket for democracy." German occupation had left much of the Soviet Union malnourished and underfed. Even Joseph Stalin confessed that American efforts in the war had helped the Soviet Union enormously. By the end of the war, the United States had shipped nearly 18,000,000 tons of products to the Soviet Union alone, and tens of millions of dollars' worth of equipment to England, the Soviet Union, Free France, China, and other Allied countries. From 1939-41, the United States remained, technically and legally, neutral. But its actions suggested that it was never truly neutral, and always on the side of the Allies.

Attributions

Images courtesy of Wikimedia Commons

Boundless U.S. History, "Preparing the Economy for War": https://courses.lumenlearning.com/boundless-ushistory/chapter/preparing-the-economy-for-war/
Source: "Statewide Dual Credit World History, The Catastrophe of the Modern Era: 1919-Present CE, Chapter 14: The World Afire: World War II, The United States, 1939-1941: Neutrality?" by Anna McCollum, via OER Commons (https://oercommons.org/courseware/lesson/88055/overview), licensed under Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/).