Another new endemic bird species has been discovered at our Cordillera Azul forest conservation project in Peru. Endemic means that a species exists in only one geographic location in the world: this bird has evolved to live solely in the unique forests of the Cordillera Azul National Park. The find follows the earlier discovery of two new fauna species and 12 plant species in Cordillera Azul. Biological monitoring of the unique, remote and largely unexplored landscape of Cordillera Azul is a key part of our project, and one of the activities funded by the support of our clients through climate finance.

In July 2016, a group of ornithologists arrived in the coffee-growing town of Flor de Café, located in the picturesque outlying Andean ridges of Cordillera Azul. The town has been a destination for ornithologists and birdwatchers since the discovery of another distinctive species in the area some twenty years earlier, the Scarlet-banded Barbet. During an expedition, birdwatcher Josh Beck spotted a strange, ground-walking antbird. Antbirds are a family of birds of the American tropics known for following ants. The bird was documented with a sound recording, and it soon proved to be a species new to science.

While there is still a lot to learn about the Cordillera Azul Antbird (Myrmoderus eowilsoni), here is what a follow-up expedition discovered. Its closest relative is the Ferruginous-backed Antbird (whose nearest populations are about 1,500 km to the east, in lowland forests of Brazil); it eats insects; the males and females sing different songs; it lives in the pristine understory of humid forest; and its future near Flor de Café is very grim. Read the formal description of the species in ‘The Auk: Ornithological Advances’. The Cordillera Azul Antbird was found in the surroundings of the Plataforma hamlet, in the mountains between the Biabo and Ponsillo rivers; this ridge is the only known location of the species. Researchers hope that further expeditions can be carried out within the park to locate more populations that likely exist further inland.

Its diminishing home

Worryingly, habitat destruction is advancing rapidly on the bird’s distinctive forest due to the expansion of coffee plantations. Chainsaws are an overwhelming component of the soundscape around town. Such is the pace of this relentless deforestation that locals were even asked to delay cutting so that a better sound recording of the Antbird could be gathered.

Hope for the Antbird

Flor de Café lies just 9 kilometres from the border of the Cordillera Azul National Park, which protects over 13,500 km² of pristine habitat. Together with our partners, and through climate finance from our clients, we are working to protect this area. Read more about our forest conservation and environmental education activities.

From both an ornithological and a general ecological perspective, Cordillera Azul remains mysterious and tantalizing. Perhaps it holds a new hummingbird or tody-tyrant? Regardless of any future discoveries to be made in the park, it is our hope that the new Antbird brings attention to the incredibly biodiverse and distinctive ecosystems of the region, and serves as a potent reminder of how far we still have to go in cataloguing the diversity of life on this planet.
Environmental education and conservation work is critical not only in the Plataforma hamlet but across the entire Amazon basin, to ensure that this species, and all the as-yet-undiscovered species endemic to the region, continue to surprise us. You can support this project, and help more species like the Antbird be both discovered and saved, by protecting the forest habitat of Cordillera Azul.
Curious Kids: Where do black holes lead to? - Posted by The Conversation | - Monday 9 July 2018, 10:48 AM (EST)

The Conversation is asking kids to send in questions they'd like an expert to answer. Merion from Fremantle, Western Australia, wants to know where black holes lead to. An expert in astrophysics explains.

Hi Merion. First, let’s start off with what black holes are. Black holes can form when a massive star dies. Stars have a lot of mass, which means there is a lot of gravity pulling in on the star. Gravity is the same force that keeps you on Earth so you don’t float into space! These stars are also made up of very hot gas which gives off a lot of heat. This creates a force which pushes on the star from the inside out. Normally the pull from gravity and the push from the heat balance each other out. But as the star gets older, it burns up all of its fuel and there isn’t anything left to push out anymore. Now gravity takes over, and all of the mass of the star falls in on itself into a single point. This is what we call a black hole.

You will never be able to escape a black hole

Because black holes are made up of a lot of mass squished into a very small area of space (in science speak, we say black holes are very dense), they create a lot of gravity. This pulls in anything that gets too close. The pull they create is so strong that if you get too close to a black hole – even if you are travelling away from it at the fastest speed it is possible to go – you will never be able to escape. The boundary where this happens is what astronomers call the event horizon. Once you are inside the event horizon of the black hole, you will never be able to leave.

Black holes were given that name because if you were to take a picture of one, you wouldn’t be able to see anything. No light would be able to escape the black hole and make it to the camera (and after all, all a camera does is record light). You would just see a picture of the universe with a dark circle where the black hole sits. Sadly, it is really hard to get a camera good enough to take pictures like that. Instead, astronomers study black holes by looking at the stuff that is getting sucked into them, before it gets too close and goes past the event horizon. There is no way for us to see what happens once you get inside.

So, where do they lead to?

Now to the big question: what happens once you go into a black hole and past the event horizon? The answer is that we don’t actually know yet. We are still trying to figure that out! One idea is that black holes form things called wormholes. You can read this Curious Kids article to find out all about wormholes. These wormholes act as tunnels between two different parts of space. This means that you could step into a black hole and end up in a completely different part of our universe. You might even end up in a different universe! Astronomers have spent a lot of time trying to describe how wormholes could form and work. We won’t know for sure if that is really what happens once you go through a black hole, though, until we figure out a way to see it happen. Maybe one day you will become a scientist and help us find these answers. Your excellent question shows you are on the right track.

Hello, curious kids! If you have a question you’d like an expert to answer, ask an adult to send it to email@example.com. Make sure they include your name, age (and, if you want to, which city you live in). All questions are welcome – serious, weird or wacky!
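A side note for grown-ups (or very curious kids) who want to see the maths behind the event horizon. This is only a rough sketch using the classical escape-velocity formula rather than full general relativity, though it happens to give the right answer. The "fastest speed it is possible to go" is the speed of light, $c$; setting the escape velocity equal to $c$ gives the radius of the event horizon, known as the Schwarzschild radius:

\[ v_{\text{esc}} = \sqrt{\frac{2GM}{r}}, \qquad v_{\text{esc}} = c \;\Rightarrow\; r_s = \frac{2GM}{c^2} \]

Here $M$ is the black hole's mass and $G$ is the gravitational constant. Anything closer to the centre than $r_s$, even light, cannot get back out. For a black hole of three times the Sun's mass, $r_s$ works out to roughly 9 km.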
26 November 2018, 03:31 PM (EST) You say that there is a theory that black holes lead to wormholes. I would've thought it obvious that they just compress things that go into the black hole for more mass, which in turn makes more gravity.
Social media platforms like Facebook, Twitter, Snapchat, and Instagram are exclusively used for taking a break from academics, right? That’s what most of us believe. The fact is, lots of students are leveraging social media as a study tool. So, parents, you shouldn’t feel bad that Spectrum phones are keeping your kids on social media more than ever. There are lots of practical ideas that can help students use social media for personalized and collaborative studying. Give these 10 tips a read to find out how social media can help students with their studies:

Tip #1: Create A Community

Students are often challenged by a course assignment. With social media, they can create a community to make studying and communicating efficient for everyone. For instance, they can create a group for the entire class on Facebook and use it to collaborate and share study tips. It’s even okay to invite the class professor to follow the group conversation and give their input. These study networks are not limited to students from one school only.

Tip #2: Model Intellectual Tolerance

Social media also teaches students how to relate to others who not only look different but act differently and hold different ideas from their own.

Tip #3: Organize All Learning Resources

Online tools help keep information organized and accessible. If the course documents are not posted online, Dropbox or Google Drive can be used to gather study materials. Resources can also be shared using collection-building tools like Tumblr or Pinterest.

Tip #4: Create Challenges on Social Media

A biology teacher from Bergen County proposed a challenge to his students: they were required to debate “Meiosis” on Twitter using a specific hashtag. The students needed to know meiosis well enough to make their points in 140 characters. It was an opportunity for them to learn and have fun at the same time. The teacher believed that no matter how much we resist, technological trends will eventually become the standard. So, why not embrace them already?

Tip #5: Record Missed Lectures

Everybody agrees that video is a great way of complementing lessons. There was a time when a missed class meant a missed opportunity to learn. Now, teachers can record their lectures and share the videos with the entire class over connections like those bundled with Spectrum TV packages. These videos can be made accessible to all, along with some extra learning materials.

Tip #6: Improve Writing Skills with Blogging

Students are now less likely to write their thoughts in journals; instead, they post what’s on their mind on social media. But teachers can introduce blogs into the classroom and encourage students to express themselves. Blogging not only improves their writing skills but also encourages them to express themselves creatively, with a positive impact on their thinking overall.

Tip #7: Team Building with Projects

Social media is extremely useful for starting team projects. Building soft skills in groups is already heavily emphasized, and social sites can help by letting students hold meetings in real time to work on their projects. Working in teams, dividing the workload, and completing the assignment within the deadline teaches them how to share ideas, collaborate, and contribute responsibly.

Tip #8: Opportunity to Learn About Education

Social media sites are not just a way to connect with people. They are platforms full of links to other resources. Apart from music and games, students can also find answers to questions.
For instance, they can find data and survey results on a variety of topics shared in different groups, and they can search for groups based on their interests. Social networks also provide an opportunity to learn about education itself. There are students everywhere who want to study abroad, and social media can provide useful information on student grants, scholarships, and study programs in different countries.

Tip #9: Distance Learning Opportunities

Educators are looking for new approaches to attract students to distance learning, and they have found social media integration useful for it. Students are more likely to engage with learning programs that integrate social media platforms. That’s why many colleges and universities have started to encourage learning through social media.

Tip #10: Attention To Reading

Another positive side of using social media in education is the desire to read without any obligation. Teenagers usually don’t like reading. But since social media bombards them with messages, comments, news, and articles, students read them without any hesitation. They make an effort to keep up with the latest trends because they don’t want to look bad among their friends.

Pretty soon, it will be impossible to ignore the influence of social media on education. Social media isn’t all detrimental; it can actually have a positive impact on education. Of course, teachers should follow a clear set of do’s and don’ts for integrating it into the classroom: it must be based on sound learning principles and it must support the curriculum. So go ahead, upgrade your internet speed by choosing one of the Spectrum Internet plans. But do make sure your kids are integrating social media into their education the right way.
About This Chapter

Natural Resources & Environmental Impact - Chapter Summary

No matter your current knowledge of natural resources and human environmental impact, this chapter's engaging video lessons can help strengthen your grasp of the basics. Discover or get reacquainted with pollution, the carbon cycle, fossil fuels and more. Upon completion of this chapter, you will be ready to:
- Define and list types of natural resources
- Discuss the scarcity and allocation of land and natural resources
- Offer details about the carbon cycle and long-term carbon storage
- Exhibit knowledge of ecological conservation and the impact of humans on the environment
- Differentiate between physical, chemical and biological pollution
- Explain how invasive and introduced species alter ecological balance
- Share facts about fossil fuels, greenhouse gases and global warming

As you navigate the lessons in this chapter, watch them as short videos that average 8 minutes each or read the verbatim transcripts. If you'd like to find out how well you understand the main lesson concepts, don't hesitate to take the accompanying multiple-choice quizzes. Reach out to our subject-matter experts with any questions you have about specific topics presented in this chapter. Around-the-clock mobile access lets you study anytime via your preferred smartphone or tablet!

1. What Are Natural Resources? - Definition & Types
Natural resources are materials provided by the Earth that humans can use to make more complex (human-made) products. In this lesson, you will learn some examples of natural resources and how to classify them.

2. Land & Natural Resources: Scarcity & Allocation
This lesson explores the importance of two very unique resources to economic development: land and natural resources. It explains how land can last forever, while some natural resources can run out. That said, both require expert management.

3. The Carbon Cycle and Long-Term Carbon Storage
All living organisms have a role in the carbon cycle. Do you understand how humans, animals and plants use carbon? This lesson will introduce you to the carbon cycle and explain how it functions on a global scale.

4. Human Environmental Impact & Ecological Conservation
The Earth has only so much space for us and our non-human neighbors. In this lesson, we will learn about how humans impact the environment, what causes habitat fragmentation, and what ecological conservation can do to help.

5. Pollution: Physical, Chemical & Biological
Pollution is the presence of unwanted substances in an environment. It is often the result of human interference. Learn about physical, chemical and biological pollutants and see examples of each.

6. Fossil Fuels, Greenhouse Gases, and Global Warming
In this video lesson, you'll learn what roles fossil fuels and greenhouse gases play in global warming, as well as what life on Earth can expect due to rising carbon dioxide levels within Earth's atmosphere.

7. How Introduced and Invasive Species Alter Ecological Balance
What happens to your block when a new neighbor moves in? Something changes, right? Now think about that on an ecological scale: what happens to an environment when a new SPECIES moves in?

Earning College Credit

Did you know… We have over 200 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page.

Transferring credit to the school of your choice

Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.

Other chapters within the CLEP Biology: Study Guide & Test Prep course
- Scientific Principles
- Review of Inorganic Chemistry For Biologists
- Introduction to Organic Chemistry
- Cell Biology
- How Enzymes Work
- Basics of DNA & RNA
- Process of DNA Replication
- The Transcription and Translation Process
- Basics of Gene Mutations
- Basics of Metabolic Biochemistry
- Overview of Cell Division
- Plant Biology
- Plant Reproduction and Growth
- Physiology I: The Circulatory, Respiratory, Digestive, Excretory, and Musculoskeletal Systems
- Physiology II: The Nervous, Immune, and Endocrine Systems
- Animal Reproduction and Development
- Biology of Genetics
- Principles of Ecology
- Speciation & Evolution
- The Study of Life On Earth
- Classification of Organisms Overview
- Social Biology
- Analyzing Scientific Data
- CLEP Biology Flashcards
Once upon a time, stories were used to teach. They were used to deliver powerful meaning and messages. They were fables, bedtime stories and fairy tales. Stories were told around a campfire or, more recently, read in gripping novels or watched in an engaging animation or film.

In today’s learning environment, programs can become “massive data dumps” that ignore the value and impact of a good story. Learners are exposed to massive amounts of disengaging content that they can’t connect to. They end up unable to see why and how the content applies to them or their jobs. “Stories have the ability to encapsulate, into one compact package, information, knowledge, context and emotion” (Norman, 1993). That is why you will remember your favourite bedtime story as a child yet struggle to remember what you did at work on Monday.

So how do stories improve the outcomes and returns of your training material? When you use stories as part of your training:
- They grab and retain learner attention
- Learning becomes fun as opposed to meeting a list of objectives
- They establish the content flow and engage learners at every point
- Learners will remember the concepts covered in the course, as they always remember a good story

To get the most out of storytelling in your training:
- Provide plenty of opportunities for your learners to tell their story during your training. Telling their story allows them to consider how what you are teaching applies to them.
- Use scenarios and allow opportunities for your learners to practise what you are teaching. Give them options to choose from, allowing them to see the consequences of their choices.
- Use examples of other people who have applied your teachings and succeeded.
- Ensure the technology you are using effectively highlights and prioritises storytelling.
- Practise your stories and make sure they are relevant, entertaining and appealing. Just because you are using stories doesn’t guarantee you are using them to their full potential.

Sharing our Knowledge through LIVE ONLINE VIRTUAL EVENTS
Is handwriting a skill that you should be teaching your child? In today’s world, kids are used to touch screens and typing their letters instead of focusing on handwriting skills. I believe handwriting is crucial because it improves children’s:
- motor skills

Typing and using technology is our new normal BUT handwriting is STILL essential to teach our kids. Handwriting can be tricky to teach because kids need to combine the skills of holding a pencil correctly and creating the correct letter formation.

Quick Links: 15 Handwriting Activities for Preschoolers

How can you make learning handwriting fun but simple, so your child is interested in practicing? I recommend doing these THREE things!
- Try one activity once or twice a week
- Plan engaging activities to practice their skills
- Stay positive with your children!

From personal experience, I know it can be hard to think of helpful activities to do with your child that won’t bore them after a few minutes. Luckily, I have developed a list of (15) FUN Handwriting Activities YOU can implement today! But before we get started talking about the fun activities that you can do, it is essential to note that the first step in handwriting is showing your child how to hold a pencil correctly.

Pencil Grasp Development

Before kids start practicing letter development, they have to understand how to hold a pencil correctly. Believe it or not, at 3 months your baby is already working on the grasping skills that will help them later on when learning how to hold a pencil! When your son or daughter is in kindergarten, they use a dynamic tripod grasp, which is most similar to what adults use. Take a look at this Pencil Grip Site to get some more information!

Handwriting Worksheet Activities (7 total)

The best way to develop proper handwriting skills is for your child to get a pencil in their hand and practice writing on a piece of paper! Now, it’s not very fun to practice writing on a blank piece of paper. To get your child interested in practicing their skills, you have to make it exciting enough that they want to do it. Pinterest and Teachers Pay Teachers have an ENDLESS amount of activities to choose from. Here are some of the favorites that I have come across that worked for us!

Activity #1: Box It Up

Activities #1 through #5 are all from the shop, Super God Not Super Mom. These worksheets caught my eye because of the images, colors, and games. I knew my daughter would be excited to do these worksheets. I gave her the choice of which one she wanted to do first, and she chose Box It Up. This worksheet focuses on having children write an uppercase letter in the box without going outside of the lines. Writing inside the lines is one of the hardest things for kids to do when learning how to write. Personally, this was one of my favorites because my daughter struggles with making her letters the correct size.

Activity #2: Rainbow Roll

Add some color into learning how to write by letting your preschooler use colored pencils! Even adding something as simple as dice into an activity worked like a charm! Get a die out and have them roll it. Have them pick a color, then trace the letter in that color as many times as the number they rolled. Repeat this process until the worksheet is completed! Want to add some math into this activity? Get two dice out and work with your child on adding the two numbers together to see how many times they should trace each letter!
Activity #3: Spin The Letter Wheel

We used a spinner, and it worked great for us! Take a look at how we did this worksheet in action by watching this video.

Activity #4: Lovely Lines

This is a more traditional worksheet, and you do need to do some of these with kids so that they understand how to write correctly. Kids get so enthralled by the game aspect of a worksheet that they may not form their letters correctly because they want to see what will happen next! It’s important to incorporate both types of worksheets into your teaching methods. The Lovely Lines activity focuses on proper letter development and spacing.

Activity #5: Final Four

The Final Four worksheet wraps up the series for this bundle! It’s a quick review of all the lessons that you previously did to see your child’s progress!

Activity #6: Pen Control and Tracing

Who wants to spend endless amounts on worksheets and workbooks? I sure don’t! This is one reason my preschooler loves using this Pen Control and Tracing Book; it is a dry erase book that allows you to do the activities over and over. *One tip that I would give is to wipe off the workbook when your child is done with it, because the dry erase marker doesn’t entirely come off if you let it sit there for a week without using it again.* Here’s a look at some of the activities inside this workbook!

Activity #7: Reading and Math Jumbo Workbook

Out of all the worksheets that I have mentioned, the Reading and Math Jumbo Workbook is my favorite. The workbook is HUGE, and it has handwriting, reading, and math activities!

Non-Worksheet Activities (8 total)

If you ask your preschooler whether they would rather play with pencils or cookie sprinkles, what do you think they’d choose? Worksheet activities are critical to proper handwriting development, but you shouldn’t do them every day. Here are some fun activities that you can do on your off days!

Activity #8: Salt Tray Writing

The best part about salt trays is that you can use ingredients that you most likely have at home! I love baking, so I have many different sprinkles for cookies stored away, and that’s what we decided to use first. If you don’t have one of these trays, I recommend them! Ours has been used so much with all of the art projects and play-doh activities that we have done! You do need to use a whole container of sprinkles on your tray, cookie sheet, etc. for it to work best. Depending on your child’s writing ability, you can add a sheet of paper underneath the salt or sugar, so when they begin to make the letter, they can look for lines to trace it correctly. We also tried just using sugar, and that worked well too! The thing to remember about this activity is that your child’s writing won’t be perfect, because they are using their fingers and not a pencil, so make sure to encourage them to try their hardest and, most of all, have fun!

Activity #9: Chalk Board Writing

The first thing we did to update our house when we moved in was put a chalkboard on one of the walls of our playroom! It’s super easy to do: all you need is chalk paint, applied in a few coats so it works and looks its best! My kids love using the chalkboard, so whenever I incorporate learning with it, I know it’ll be a hit. I drew a set of lines that I wanted her to draw the letters within, and I did the first letter to demonstrate how to do it correctly!

Activity #10: Play-Doh Letters

Every child I have met and taught LOVES Play-Doh. There are several ways that you can use Play-Doh for pre-writing activities.
1. Individual Letters

If your child is younger, this is the best activity for them. I downloaded some for FREE using this website: Individual Letter Play-Doh Mat. When I was showing my daughter how to do this, I had her roll the play-doh into a snake-like formation to create the letters.

2. Sight Words

If you have a school-aged child, this sight word activity is perfect for them. I downloaded these for FREE using the following site: Sight Word Play-Doh Mat. This is an enticing way to get your kids excited about reviewing or learning sight words. RELATED ARTICLE: 7 Easy Ways To Teach Sight Words To Preschoolers.

Activity #11: Shaving Cream Fun

Want to win the best parent of the day award? Tell your kids that they are allowed to play with shaving cream today! You can have them practice writing uppercase/lowercase letters, writing their name, and sight words if they are starting to learn how to read as well. Again, this tray has been a lifesaver for me with all of the activities that I do with my kids, because it keeps everything inside the tray and off my kitchen table and desk!

Activity #12: Painting with Q-Tips/Do-A-Dots

I have never met a kid that didn’t like to paint! And using different tools to paint ups the ante even more! To do this activity, you’ll need:

Starting to work on forming the letters in an engaging way will get your child interested in handwriting!

Activity #13: Magic Board Writing

Does anyone else feel like they struggle to get their child to do any school-related work? As a parent, I feel like we will struggle with this daily when it comes time to do homework after school. I like both of these boards because I can demonstrate how to make the letter and erase my work, then she can try! She’s instantly engaged! Well, at least for a few minutes. 🙂

Activity #14: Glitter Glue Letters!

Out of all the non-worksheet activities that I have listed, this was my daughter’s favorite! Glitter glue is so fun to play with, but you have to make sure you buy the right brand, or it’s too hard for kids to get the glitter glue out. We had great success using Elmer’s brand of glitter glue! With this activity, you can have them practice making shapes, lines, numbers, their first/last name, and of course the alphabet. All I did was write her name on a blank sheet of paper, and she used the glitter glue to outline the letters the best she could!

Activity #15: Bead Writing

This activity can be done in a few different ways! If you have a younger child just beginning to work on their handwriting skills, you can make letters out of Play-doh cut-outs. You can make the letters for them; then they can place the beads on the play-doh to form the letter! Alternatively, you can draw a letter on a piece of paper, or find a printout of a traceable letter that you want them to work on, and they can place the beads on the letter. Older kids will enjoy attempting to create the letters on their own! Again, this tray saves the day from a gigantic mess!

6 Common Mistakes Parents Frequently Encounter

Did anyone else’s parents keep some of their work from when they were little? Well, mine did, and YIKES, my handwriting was a mess when I was little. Handwriting is tough for kids because many small details go into making a letter. Here are 6 common mistakes you may see from your child when they are learning how to write:
1. Forming letters
2. Making letters the right size
3. Holding the pencil correctly
4. Keeping the paper steady with one hand while writing with the other
5. Spacing letters and words
6. Maintaining proper arm position when writing

Ugh, that’s a lot. Kids’ handwriting will not be perfect. Keep practicing and encouraging them to try their best! If you notice some of these common mistakes in your child’s handwriting, try some of the activities that are specific to the problem. For example, my daughter has trouble with the sizing and spacing of her letters, and Activity #1 Box It Up and Activity #4 Lovely Lines mentioned above worked on those skills. My recommendation is to start handwriting practice early! In kindergarten, kids are expected to be able to write their first and last name, so it’s never too early to start preparing them.

Final Thoughts and Conclusion

Learning how to write is a life skill that your child will use when they are writing you cute notes, writing letters to Santa, and writing essays in school. Handwriting is important to our everyday lives, but it can be challenging for some kids to create legible letters. Thankfully, by checking out my list of (15) handwriting activities, you and your child will be off to a great start on their handwriting journey! If you decide to do one of these activities at home with your child, I’d love to hear how it went! What are some of the writing activities that you do at home with your child that have been successful? Please share them with us by leaving a comment below.
Approaches to Learning

Constructivism

Constructivism as a paradigm or worldview posits that learning is an active, constructive process. The learner is an information constructor. People actively construct or create their own subjective representations of objective reality. New information is linked to prior knowledge; thus, mental representations are subjective.

Originators and important contributors: Vygotsky, Piaget, Dewey, Vico, Rorty, Bruner

Keywords: Learning as experience, activity and dialogical process; Problem Based Learning (PBL); Anchored instruction; Vygotsky’s Zone of Proximal Development (ZPD); cognitive apprenticeship (scaffolding); inquiry and discovery learning.

A reaction to didactic approaches such as behaviourism and programmed instruction, constructivism states that learning is an active, contextualized process of constructing knowledge rather than acquiring it. Knowledge is constructed based on personal experiences and hypotheses of the environment. Learners continuously test these hypotheses through social negotiation. Each person therefore has a different interpretation and construction of knowledge. The learner is not a blank slate (tabula rasa) but brings past experiences and cultural factors to a situation.

NOTE: A common misunderstanding regarding constructivism is that instructors should never tell students anything directly but instead should always allow them to construct knowledge for themselves. This is actually confusing a theory of pedagogy (teaching) with a theory of knowing. Constructivism assumes that all knowledge is constructed from the learner's previous knowledge, regardless of how one is taught. Thus, even listening to a lecture involves active attempts to construct new knowledge. Vygotsky's social development theory is one of the foundations for constructivism.

Citation: Constructivism. (2016, March 05). Retrieved from: http://www.learning-theories.com/constructivism.html

Approaches rooted in constructivism:

Cognitive Apprenticeship

Cognitive Apprenticeship is a theory that attempts to bring tacit processes out into the open. It assumes that people learn from one another, through observation, imitation and modelling.

Originator: Collins, Brown and Newman

Key Terms: Modelling, coaching, scaffolding, articulation, reflection

Around 1987, Collins, Brown, and Newman developed six teaching methods: modelling, coaching, scaffolding, articulation, reflection and exploration. These methods enable students to use cognitive and metacognitive strategies for "using, managing, and discovering knowledge".

In modelling, experts (usually teachers or mentors) demonstrate a task explicitly, and new students or novices build a conceptual model of the task at hand. For example, a math teacher might write out explicit steps and work through a problem aloud, demonstrating her heuristics and procedural knowledge. During coaching, the expert gives feedback and hints to the novice. Scaffolding is the process of supporting students in their learning: support structures are put into place, and in some instances the expert may have to help with aspects of the task that the student cannot do yet. McLellan describes articulation as (1) separating component knowledge and skills to learn them more effectively and (2), more commonly, verbalizing or demonstrating knowledge and thinking processes in order to expose and clarify them. This process gets students to "articulate their knowledge, reasoning, or problem-solving process in a domain" (p. 482).
This may include inquiry teaching (Collins & Stevens, 1982), in which teachers ask students a series of questions that allows them to refine and restate their learned knowledge and to form explicit conceptual models. Thinking aloud requires students to articulate their thoughts while solving problems. Students assuming a critical role monitor others in cooperative activities and draw conclusions based on the problem-solving activities.

Reflection allows students to "compare their own problem-solving processes with those of an expert, another student and, ultimately, an internal cognitive model of expertise" (p. 483). A technique for reflection could be to examine the past performances of both expert and novice and to highlight similarities and differences. The goal of reflection is for students to look back and analyse their performances with a desire for understanding and improvement towards the behaviour of an expert.

Exploration involves giving students room to problem-solve on their own and teaching them exploration strategies. The former requires the teacher to slowly withdraw the use of supports and scaffolds, not only in problem-solving methods but in problem-setting methods as well. The latter requires the teacher to show students how to explore, research, and develop hypotheses. Exploration allows students to frame interesting problems within the domain for themselves and then take the initiative to solve these problems.

For more information, see:
• Collins, A., Brown, J. S., & Newman, S. E. (1987). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics (Technical Report No. 403). BBN Laboratories, Cambridge, MA. Centre for the Study of Reading, University of Illinois. January, 1987.

Discovery Learning (Bruner)

Discovery Learning is a method of inquiry-based instruction which holds that it is best for learners to discover facts and relationships for themselves.

Originator: Jerome Bruner (1915-2016)

Keywords: Inquiry-based learning, constructivism

Discovery learning is an inquiry-based, constructivist learning theory that takes place in problem-solving situations where the learner draws on his or her own past experience and existing knowledge to discover facts, relationships and new truths to be learned. Students interact with the world by exploring and manipulating objects, wrestling with questions and controversies, or performing experiments. As a result, students may be more likely to remember concepts and knowledge discovered on their own (in contrast to a transmissionist model). Models based upon discovery learning include: guided discovery, problem-based learning, simulation-based learning, case-based learning, and incidental learning, among others.

Proponents of this theory believe that discovery learning has many advantages, including:
• encouraging active engagement
• promoting motivation
• promoting autonomy, responsibility and independence
• developing creativity and problem-solving skills
• providing a tailored learning experience

Critics have sometimes cited disadvantages, including:
• creation of cognitive overload
• potential misconceptions
• teachers may fail to detect problems and misconceptions

The theory is closely related to work by Jean Piaget and Seymour Papert.

For more information, see:
• Bruner, J.S. (1967). On knowing: Essays for the left hand. Cambridge, Mass: Harvard University Press.

Citation: Discovery Learning (Bruner). (2016, March 05).
Retrieved from: http://www.learning-theories.com/discovery-learning-bruner.html

Social Development Theory (Vygotsky)

Social Development Theory argues that social interaction precedes development; consciousness and cognition are the end products of socialization and social behaviour.

Originator: Lev Vygotsky (1896-1934)

Key terms: Zone of Proximal Development (ZPD), More Knowledgeable Other (MKO)

Vygotsky's Social Development Theory is the work of Russian psychologist Lev Vygotsky (1896-1934), who lived during the Russian Revolution. Vygotsky's work was largely unknown to the West until it was published in 1962. Vygotsky’s theory is one of the foundations of constructivism. It asserts three major themes:

1. Social interaction plays a fundamental role in the process of cognitive development. In contrast to Jean Piaget's understanding of child development (in which development necessarily precedes learning), Vygotsky felt that social learning precedes development. He states: "Every function in the child's cultural development appears twice: first, on the social level, and later, on the individual level; first, between people (interpsychological) and then inside the child (intrapsychological)." (Vygotsky, 1978)

2. The More Knowledgeable Other (MKO). The MKO refers to anyone who has a better understanding or a higher ability level than the learner with respect to a particular task, process, or concept. The MKO is normally thought of as a teacher, coach, or older adult, but the MKO could also be a peer, a younger person, or even a computer.

3. The Zone of Proximal Development (ZPD). The ZPD is the distance between a student's ability to perform a task under adult guidance and/or with peer collaboration and the student's ability to solve the problem independently. According to Vygotsky, learning occurs in this zone.

Vygotsky focused on the connections between people and the sociocultural context in which they act and interact in shared experiences (Crawford, 1996). According to Vygotsky, humans use tools that develop from a culture, such as speech and writing, to mediate their social environments. Initially, children develop these tools to serve solely as social functions, as ways to communicate needs. Vygotsky believed that the internalization of these tools led to higher thinking skills.

Applications of the Vygotsky Social Development Theory

Many schools have traditionally held a transmissionist or instructionist model in which a teacher or lecturer 'transmits' information to students. In contrast, Vygotsky’s theory promotes learning contexts in which students play an active role in learning. The roles of the teacher and student are therefore shifted, as a teacher should collaborate with his or her students in order to help facilitate meaning construction in students. Learning therefore becomes a reciprocal experience for the students and teacher.

For more information, see:
• Luis C. Moll's book: L.S. Vygotsky and Education (Routledge Key Ideas in Education). An accessible, introductory volume that provides a good summary of Vygotskian core concepts, including the sociocultural genesis of human thinking, a developmental approach to studying human thinking, and the power of cultural mediation in understanding and transforming educational practices. Well written and worth a look.

Citation: Social Development Theory (Vygotsky). (2016, March 05).
Retrieved from: http://www.learning-theories.com/vygotskys-social-learning-theory.html

Humanism

Humanism is a paradigm/philosophy/pedagogical approach that views learning as a personal act undertaken to fulfil one's potential.

Key proponents: Abraham Maslow, Carl Rogers, Malcolm Knowles

Key terms: self-actualization, teacher as facilitator, affect

Humanism, a paradigm that emerged in the 1960s, focuses on human freedom, dignity, and potential. A central assumption of humanism, according to Huitt (2001), is that people act with intentionality and values. This is in contrast to the behaviourist notion of operant conditioning (which argues that all behaviour is the result of the application of consequences) and the cognitive psychologists' belief that discovering knowledge or constructing meaning is central to learning. Humanists also believe that it is necessary to study the person as a whole, especially as an individual grows and develops over the lifespan. It follows that the study of the self, motivation, and goals are areas of particular interest.

Key proponents of humanism include Carl Rogers and Abraham Maslow. A primary purpose of humanism could be described as the development of self-actualized, autonomous people. In humanism, learning is student-centred and personalized, and the educator's role is that of a facilitator. Affective and cognitive needs are key, and the goal is to develop self-actualized people in a cooperative, supportive environment. Related theories include: Experiential Learning (Kolb), Maslow's Hierarchy of Needs, and Facilitation Theory (Rogers).

For more information, see:
• DeCarvalho, R. (1991). The humanistic paradigm in education. The Humanistic Psychologist, 19(1), 88-104.
• Huitt, W. (2001). Humanism and open education. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved September 11, 2007, from the URL: http://chiron.valdosta.edu/whuitt/col/affsys/humed.html.
• Rogers, C., & Freiberg, H.J. (1994). Freedom to learn (3rd Ed.). New York: Macmillan.

Citation: Humanism. (2016, March 05). Retrieved from: http://www.learning-theories.com/humanism.html

Examples of the Humanist approach:

ARCS Model of Motivational Design (Keller)

According to John Keller's ARCS Model of Motivational Design, there are four steps for promoting and sustaining motivation in the learning process: Attention, Relevance, Confidence, Satisfaction (ARCS).

Originator: John Keller

Key terms: Attention, Relevance, Confidence, Satisfaction (ARCS)

Attention

According to Keller, attention can be gained in two ways: (1) Perceptual arousal, which uses surprise or uncertainty to gain interest, drawing on novel, surprising, incongruous, and uncertain events; or (2) Inquiry arousal, which stimulates curiosity by posing challenging questions or problems to be solved.

Methods for grabbing the learners' attention include the use of:
• Active participation - Adopt strategies such as games, roleplay or other hands-on methods to get learners involved with the material or subject matter.
• Variability - To better reinforce materials and account for individual differences in learning styles, use a variety of methods in presenting material (e.g. videos, short lectures, mini-discussion groups).
• Humor - Maintain interest by using a small amount of humor (but not so much as to be distracting).
• Incongruity and Conflict - A devil's advocate approach in which statements are posed that go against a learner's past experiences.
• Specific examples - Use visual stimuli, a story, or a biography.
• Inquiry - Pose questions or problems for the learners to solve, e.g. brainstorming activities.

Relevance

Establish relevance in order to increase a learner's motivation. To do this, use concrete language and examples with which the learners are familiar. Six major strategies described by Keller include:
• Experience - Tell the learners how the new learning will use their existing skills. We learn best by building upon our existing knowledge or skills.
• Present Worth - What will the subject matter do for me today?
• Future Usefulness - What will the subject matter do for me tomorrow?
• Needs Matching - Take advantage of the dynamics of achievement, risk taking, power, and affiliation.
• Modeling - First of all, "be what you want them to do!" Other strategies include guest speakers, videos, and having the learners who finish their work first serve as tutors.
• Choice - Allow the learners to use different methods to pursue their work, or allow a choice in how they organize it.

Confidence

Help students understand their likelihood of success. If they feel they cannot meet the objectives or that the cost (in time or effort) is too high, their motivation will decrease.
• Provide objectives and prerequisites - Help students estimate the probability of success by presenting performance requirements and evaluation criteria. Ensure the learners are aware of performance requirements and evaluative criteria.
• Allow for success that is meaningful.
• Grow the Learners - Allow for small steps of growth during the learning process.
• Feedback - Provide feedback and support internal attributions for success.
• Learner Control - Learners should feel some degree of control over their learning and assessment. They should believe that their success is a direct result of the amount of effort they have put forth.

Satisfaction

Learning must be rewarding or satisfying in some way, whether from a sense of achievement, praise from a higher-up, or mere entertainment.
• Make the learner feel that the skill is useful or beneficial by providing opportunities to use newly acquired knowledge in a real setting.
• Provide feedback and reinforcement. When learners appreciate the results, they will be motivated to learn. Satisfaction is based upon motivation, which can be intrinsic or extrinsic.
• Do not patronize the learner by over-rewarding easy tasks.

For more information, we recommend John Keller's book: Motivational Design for Learning and Performance: The ARCS Model Approach. Keller's book explains the ARCS model in detail. Separate chapters cover each component of the model and offer strategies for promoting each one in learners, with plenty of real-world examples and ready-to-use worksheets. The methods are applied to both traditional and alternative settings, including gifted classes, K-12, self-directed learning, and corporate training.

Citation: ARCS Model of Motivational Design Theories (Keller). (2016, March 05). Retrieved from: http://www.learning-theories.com/kellers-arcs-model-of-motivational-design.html

Emotional Intelligence (Goleman)

Emotional Intelligence (EQ) is defined as the ability to identify, assess, and control one's own emotions, the emotions of others, and those of groups.

Originators: Many, including Howard Gardner (1983) and Daniel Goleman (1995), who popularized the concept in his 1995 book Emotional Intelligence: Why It Can Matter More Than IQ. Several other models and definitions have also been proposed.
Key Terms: self-awareness, self-regulation, social skill, empathy, motivation

In the 1900s, even though traditional definitions of intelligence emphasized cognitive aspects such as memory and problem-solving, several influential researchers in the intelligence field of study had begun to recognize the importance of going beyond traditional types of intelligence (IQ). As early as 1920, for instance, E.L. Thorndike described "social intelligence" as the skill of understanding and managing others. Howard Gardner in 1983 described the idea of multiple intelligences, in which interpersonal intelligence (the capacity to understand the intentions, motivations and desires of other people) and intrapersonal intelligence (the capacity to understand oneself, to appreciate one's feelings, fears and motivations) helped explain performance outcomes.

The first use of the term "emotional intelligence" is often attributed to A Study of Emotion: Developing Emotional Intelligence from 1985, by Wayne Payne. However, prior to this, the term "emotional intelligence" had appeared in Leuner (1966). Stanley Greenspan (1989) also put forward an EI model, followed by Salovey and Mayer (1990), and Daniel Goleman (1995). A distinction between emotional intelligence as a trait and emotional intelligence as an ability was introduced in 2000.

Daniel Goleman's model (1998) focuses on EI as a wide array of competencies and skills that drive leadership performance, and consists of five areas:
1. Self-awareness - knowing one's emotions, strengths, weaknesses, drives, values and goals and recognizing their impact on others, while using gut feelings to guide decisions.
2. Self-regulation - managing or redirecting one's disruptive emotions and impulses and adapting to changing circumstances.
3. Social skill - managing others' emotions to move people in the desired direction.
4. Empathy - recognizing, understanding, and considering other people's feelings, especially when making decisions.
5. Motivation - motivating oneself and being driven to achieve for the sake of achievement.

To Goleman, emotional competencies are not innate talents but learned capabilities that must be worked on and can be developed to achieve outstanding performance. Goleman believes that individuals are born with a general emotional intelligence that determines their potential for learning emotional competencies.

Emotional Intelligence has not always been widely accepted in the research community. Goleman's model of EI, for instance, has been criticized in the research literature as being merely "pop psychology." However, EI is still considered by many to be a useful framework, especially for businesses.

For more information, we recommend the following books:

Goleman's book: Emotional Intelligence: Why It Can Matter More Than IQ. A well-written book by a former writer for the New York Times. The book explains how the rational and the emotional work together to shape intelligence, citing the neuroscience and psychology of the brain. Goleman explains why IQ is not the sole predictor of success; furthermore, he demonstrates how emotional intelligence can impact important life outcomes. A fascinating read!

Bradberry, Greaves and Lencioni's book: Emotional Intelligence 2.0. A book that actually gives strategies for how to increase your emotional intelligence (not just explaining what emotional intelligence is).
Helps readers increase four emotional intelligence skills: self-awareness, self-management, social awareness, and relationship management. Gives access to an online test that tells you which strategies will increase your EQ the most.

Citation: Emotional Intelligence (Goleman). (2016, March 05). Retrieved from: http://www.learning-theories.com/emotional-intelligence-goleman.html

Maslow's Hierarchy of Needs

Maslow's Hierarchy of Needs (often represented as a pyramid with five levels of needs) is a motivational theory in psychology arguing that as people meet their basic needs, they seek to satisfy successively higher needs up the pyramid.

Originator: Abraham Maslow in 1943.

Key terms: deficiency needs, growth needs, physiological, safety, belongingness, esteem, self-actualization

Abraham H. Maslow felt that conditioning theories did not adequately capture the complexity of human behaviour. In a 1943 paper called A Theory of Human Motivation, Maslow presented the idea that human actions are directed toward goal attainment. Any given behaviour could satisfy several functions at the same time; for instance, going to a bar could satisfy one's needs for self-esteem and for social interaction.

Maslow's Hierarchy of Needs has often been represented as a hierarchical pyramid with five levels. The four lower levels are considered deficiency needs, while the top level of the pyramid is considered a growth need. The lower-level needs must be satisfied before higher-order needs can influence behaviour. The levels, from the top of the pyramid down, are as follows:
• Self-actualization - includes morality, creativity, problem solving, etc.
• Esteem - includes confidence, self-esteem, achievement, respect, etc.
• Belongingness - includes love, friendship, intimacy, family, etc.
• Safety - includes security of environment, employment, resources, health, property, etc.
• Physiological - includes air, food, water, sex, sleep, and other factors contributing to homeostasis.

The first four levels are considered deficiency or deprivation needs ("D-needs") in that their lack of satisfaction causes a deficiency that motivates people to meet them. Physiological needs, the lowest level of the hierarchy, include necessities such as air, food, and water. These tend to be satisfied for most people, but they become predominant when unmet. During emergencies, safety needs such as health and security rise to the forefront. Once these two levels are met, belongingness needs, such as obtaining love and intimate relationships or close friendships, become important. The next level, esteem needs, includes the need for recognition from others, confidence, achievement, and self-esteem. The highest level is self-actualization, or self-fulfilment. Behaviour in this case is not driven or motivated by deficiencies but rather by one's desire for personal growth and the need to become all the things that a person is capable of becoming (Maslow, 1970).

While a useful guide for generally understanding why students behave the way they do and for determining how learning may be affected by physiological or safety deficiencies, Maslow’s Hierarchy of Needs has its share of criticisms. Some critics have noted vagueness in what is considered a "deficiency"; what is a deficiency for one person is not necessarily a deficiency for another. Secondly, there seem to be various exceptions that frequently occur.
For example, some people often risk their own safety to rescue others from danger.

For more information about Maslow's Hierarchy of Needs, see:
• Maslow's book: Hierarchy of Needs: A Theory of Human Motivation. Maslow's classic publication, perhaps essential reading for psychology students, educators and professionals.
• Maslow's book: Toward a Psychology of Being. A useful book on human flourishing that helps you understand reaching self-actualization (sometimes called "flow" or "positive psychology"). One of Maslow's best.

Citation: Maslow's Hierarchy of Needs. (2016, March 05). Retrieved from: http://www.learning-theories.com/maslows-hierarchy-of-needs.html

Self-Determination Theory (Deci and Ryan)

Self-Determination Theory is a theory of motivation and personality that addresses three universal, innate psychological needs: competence, autonomy, and relatedness.

Originators: Edward L. Deci and Richard M. Ryan, psychologists at the University of Rochester.

Key Terms: motivation, competence, autonomy, relatedness

Self-Determination Theory (SDT) is an important theory of motivation that addresses issues of extrinsic and intrinsic motivation. People have innate psychological needs, and the theory argues that if these universal needs are met, people will function and grow optimally. To actualize their inherent potential, the social environment needs to nurture these needs:
• Competence - the need to control outcomes and experience mastery.
• Relatedness - the universal want to interact with, be connected to, and experience caring for others.
• Autonomy - the universal urge to be the causal agent of one's own life and act in harmony with one's integrated self; however, Deci and Vansteenkiste note this does not mean being independent of others.

Motivation has often been grouped into two main types: extrinsic and intrinsic. With extrinsic motivation, a person tends to do a task or activity mainly because doing so will yield some kind of reward or benefit upon completion. Intrinsic motivation, in contrast, is characterized by doing something purely for enjoyment or fun. Deci, Lens and Vansteenkiste (2006) conducted a study demonstrating that intrinsic goal framing (compared to extrinsic goal framing and no-goal framing) produced deeper engagement in learning activities, better conceptual learning, and higher persistence at learning activities.

For more information, we recommend the following additional reading:

Edward Deci's book: Why We Do What We Do: Understanding Self-Motivation. An extremely interesting book with a strong basis in empirical research. Even so, the book is very easy to read, with several case studies that a layman can easily understand. Highly recommended.

The Oxford Handbook of Work Engagement, Motivation, and Self-Determination Theory (Oxford Library of Psychology). This handbook brings together self-determination theory experts and organizational psychology experts to discuss past and future applications of the theory to the field of organizational psychology. Topics include how to bring about commitment, engagement, and passion in the workplace; managing stress, health, emotions and violence at work; etc.

Daniel Pink's book: Drive: The Surprising Truth About What Motivates Us. An extremely popular book that describes three elements of intrinsic motivation: autonomy, mastery, and purpose. Also includes a Toolkit section with strategies for individuals and companies, tips on compensation, suggestions for education, etc.

Citation: Self-Determination Theory (Deci and Ryan). (2016, March 05).
Retrieved from: http://www.learning-theories.com/self-determination-theory-deci-and-ryan.html

The cognitivist paradigm essentially argues that the "black box" of the mind should be opened and understood. The learner is viewed as an information processor (like a computer).

Originators and important contributors: Merrill - Component Display Theory (CDT), Reigeluth (Elaboration Theory), Gagne, Briggs, Wager, Bruner (moving toward cognitive constructivism), Schank (scripts), Scandura (structural learning)

Keywords: Schema, schemata, information processing, symbol manipulation, information mapping, mental models

The cognitivist revolution replaced behaviourism in the 1960s as the dominant paradigm. Cognitivism focuses on inner mental activities; opening the "black box" of the human mind is valuable and necessary for understanding how people learn. Mental processes such as thinking, memory, knowing, and problem-solving need to be explored. Knowledge can be seen as schema, or symbolic mental constructions. Learning is defined as a change in a learner's schemata.

As a response to behaviourism, cognitivism holds that people are not "programmed animals" that merely respond to environmental stimuli; people are rational beings that require active participation in order to learn, and whose actions are a consequence of thinking. Changes in behaviour are observed, but only as an indication of what is occurring in the learner's head. Cognitivism uses the metaphor of the mind as a computer: information comes in, is processed, and leads to certain outcomes.

Citation: Cognitivism. (2016, March 05). Retrieved from: http://www.learning-theories.com/cognitivism.html

Examples of Cognitivist Approaches

Cognitive Load Theory

Summary: A theory that focuses on the load placed on working memory during instruction.

Originators and proponents: John Sweller

Keywords: cognitive load theory, working memory, multimedia learning

Cognitive Load Theory of Multimedia Learning (Sweller)

John Sweller's paper, "Implications of Cognitive Load Theory for Multimedia Learning", describes the human cognitive architecture and the need to apply sound instructional design principles based on our knowledge of the brain and memory. Sweller first describes the different types of memory (working memory and long-term memory) and how the two are interrelated: schemas held in long-term memory, acting as a "central executive", directly affect the manner in which information is synthesized in working memory. Sweller then explains that in the absence of schemas, instructional guidance must provide a substitute for learners to develop their own schemas.

Sweller discusses, in his view, three types of cognitive load:

· extraneous cognitive load
· intrinsic cognitive load
· germane cognitive load

Intrinsic cognitive load

First described by Chandler and Sweller, intrinsic cognitive load is the idea that all instruction has an inherent difficulty associated with it (for instance, calculating 5 + 5). This inherent difficulty may not be altered by an instructor. However, many schemas may be broken into individual "subschemas" and taught in isolation, to be later brought back together and described as a combined whole.

Extraneous cognitive load

Extraneous cognitive load, by contrast, is under the control of instructional designers. This form of cognitive load is generated by the manner in which information is presented to learners (i.e., the design). To illustrate extraneous cognitive load, assume there are at least two possible ways to describe a geometric shape like a triangle.
An instructor could describe a triangle verbally, but showing a diagram of a triangle is much better, because the learner does not have to deal with extraneous, unnecessary information.

Germane cognitive load

Germane load is a third kind of cognitive load, one that instructional designers are encouraged to promote. Germane load is the load dedicated to the processing, construction, and automation of schemas. While intrinsic load is generally thought to be immutable, instructional designers can manipulate extraneous and germane load. It is suggested that they limit extraneous load and promote germane load. Extraneous cognitive load and intrinsic cognitive load are not ideal: extraneous load results from inappropriate instructional designs, and intrinsic load from the complexity of the information. Germane cognitive load is termed "effective" cognitive load, caused by successful schema construction. The cognitive loads are additive, and instructional design's goal should be to reduce extraneous cognitive load to free up working memory. Throughout the article, Sweller also draws interesting comparisons between human cognition and evolutionary theory.

For more information, see:

• John Sweller's book Cognitive Load Theory (Explorations in the Learning Sciences, Instructional Systems and Performance Technologies). A bit expensive, but a useful book for academics, researchers, instructional designers, cognitive and educational psychologists, and those interested in cognition and/or education technology.
• Ruth Clark's book Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load. One of the first books to contribute a full-length practical design guide to the application of CLT.

Citation: Cognitive Load Theory of Multimedia Learning (Sweller). (2016, March 05). Retrieved from: http://www.learning-theories.com/cognitive-load-theory-of-multimedia-learning-sweller.html

The term "Gestalt" comes from a German word that roughly means pattern or form. The main tenet of the Gestalt theory is that the whole is greater than the sum of its parts; learning is more than just invoking mechanical responses from learners. As with other learning theories, the Gestalt theory has laws of organization by which it must function. These organizational laws already exist in the make-up of the human mind and in how perceptions are structured.

Gestalt theorists propose that the experiences and perceptions of learners have a significant impact on the way that they learn. One aspect of Gestalt is phenomenology, which is the study of how people organize learning by looking at their lived experiences and consciousness. Learning happens best when the instruction is related to learners' real-life experiences. The human brain has the ability to make a map of the stimuli caused by these life experiences. This process of mapping is called "isomorphism."

The Gestalt theory of learning originated in Germany, being put forth by three German theorists who were inspired by the works and ideas of the man who gave the learning theory its name. Graf Christian von Ehrenfels was a learning theorist who took a holistic approach to learning, putting forth the idea that learning takes place as students come to comprehend a concept in its entirety, rather than broken up into parts.

Key Terms: holistic, mechanical response, phenomenology,
isomorphism, factor of closure, factor of proximity, trace factor, factor of similarity, figure-ground effect

Theorists: Graf Christian von Ehrenfels, Wertheimer, Kohler, Koffka. Related concept: insight learning.

Gestalt

Whenever the brain sees only part of a picture, it automatically attempts to create a complete picture. This is the first organizational law, called the "factor of closure," and it applies not only to images but also to thoughts, feelings, and sounds.

Based upon Gestalt theory, the human brain maps elements of learning that are presented close to each other as a whole, instead of as separate parts. This organizational law is called the "factor of proximity," and it is usually seen in learning areas such as reading and music, where letters and words or musical notes make no sense when standing alone, but become a whole story or song when mapped together by the human brain.

The next organizational law of the Gestalt theory is the "factor of similarity," which states that learning is facilitated when groups that are alike are linked together and contrasted with groups that present differing ideas. This form of Gestalt learning enables learners to develop and improve critical thinking skills.

When observing things around us, it is normal for the eye to ignore space or holes and to see, instead, whole objects. This organizational law is called the "figure-ground effect."

As new thoughts and ideas are learned, the brain tends to make connections, or "traces," that represent the links that occur between conceptions and ideas, as well as images. This organizational law is called the "trace factor."

The Gestalt theory placed its main emphasis on higher-order cognitive processes, requiring the learner to use higher-order problem-solving skills. Learners must look at the concepts presented to them and search for the underlying similarities that link them together into a cohesive whole. In this way, learners are able to determine specific relationships amongst the ideas and perceptions presented.

The Gestalt theory of learning emphasizes the importance of presenting information or images that contain gaps and elements that don't exactly fit into the picture. This type of learning requires the learner to use critical thinking and problem-solving skills. Rather than producing answers by rote memory, the learner must examine and deliberate in order to find the answers they are seeking. When educators present information to students using the Gestalt theory of learning, they must ensure that their instructional strategies make use of the organizational laws presented earlier in this article.

The Gestalt theory of learning came to the forefront of learning theories as a response to the Behaviorist theory. Other theories have evolved out of the original Gestalt learning theory, with different forms of the Gestalt theory taking shape. The field of Gestalt theories has come to be acknowledged as a cognitive-interactionist family of theories. The Gestalt theory holds that an individual is a whole person, and the instructional strategies used to teach them can help to discover whether anything is mentally blocking them from learning certain new information. Teaching strategies are used to present problems as a whole and to attempt to remove any mental block from the learner so that new information can be stored.

Citation: Gestalt Theory (von Ehrenfels). (2016, March 05).
Retrieved from: http://www.learning-theories.com/gestalt-theory-von-ehrenfels.html

Situated cognition is the theory that people's knowledge is embedded in the activity, context, and culture in which it was learned. It is also referred to as "situated learning."

Originators & proponents: John Seely Brown, Allan Collins, Paul Duguid

Keywords: activity, authentic domain activity, authentic learning, cognitive apprenticeship, content-specific learning, context, culture, everyday learning, knowledge, legitimate peripheral participation, socio-cultural learning, social construction of knowledge, social interaction, teaching methods

Situated Cognition (Brown, Collins, & Duguid)

Situated cognition is a theory which emphasizes that people's knowledge is constructed within and linked to the activity, context, and culture in which it was learned. Learning is social and not isolated: people learn while interacting with each other through shared activities and through language, as they discuss, share knowledge, and problem-solve during these tasks. For example, while language learners can study a dictionary to increase their vocabulary, this often solitary work teaches only basic aspects of learning a language; when language learners talk with a native speaker of the language, they will learn important aspects of how words are used in the native speaker's home culture and in everyday social interactions.

Cognitive apprenticeship is an important aspect of situated cognition. During this social interaction between a novice learner and an expert, important skills, interactions, and experiences are shared. The novice learns from the expert as an apprentice, and the expert often passes down methods and traditions which the apprentice can learn only from the expert and which constitute authentic learning. This is a form of socio-cultural learning. The expert is a practitioner of the skill and tradition, meaning that they use and practice them regularly in everyday life. The expert scaffolds the novice's learning.

This theory has helped researchers understand more about how people learn because it focuses on what people learn in their everyday lives, which are authentic contexts for a variety of skills. In addition, it helps educators understand how to capitalize on knowledge and skills that their students may already possess in order to help them learn new content and skills.

For more information, see:

• Aydede, M., & Robbins, P. (Eds.). (2009). The Cambridge handbook of situated cognition. New York, NY: Cambridge University Press.
• Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

You are welcome to share or cite this summary article. Citation: Situated Cognition (Brown, Collins, & Duguid). (2016, March 05). Retrieved from: http://www.learning-theories.com/situated-cognition-brown-collins-duguid.html

Piaget's Stage Theory of Cognitive Development is a description of cognitive development as four distinct stages in children: sensorimotor, preoperational, concrete, and formal.

Originator: Jean Piaget (1896-1980)

Key Terms: sensorimotor, preoperational, concrete, formal, accommodation, assimilation.
Piaget's Stage Theory of Cognitive Development

Swiss biologist and psychologist Jean Piaget (1896-1980) observed his children (and their process of making sense of the world around them) and eventually developed a four-stage model of how the mind processes new information it encounters. He posited that children progress through four stages and that they all do so in the same order. These four stages are:

• Sensorimotor stage (birth to 2 years old). The infant builds an understanding of himself or herself and reality (and how things work) through interactions with the environment. The infant learns to differentiate between itself and other objects. Learning takes place via assimilation (the organization of information and absorbing it into existing schema) and accommodation (when an object cannot be assimilated and the schemata have to be modified to include the object).
• Preoperational stage (ages 2 to 7). The child is not yet able to conceptualize abstractly and needs concrete physical situations. Objects are classified in simple ways, especially by important features.
• Concrete operations (ages 7 to 11). As physical experience accumulates, accommodation increases. The child begins to think abstractly and conceptualize, creating logical structures that explain his or her physical experiences.
• Formal operations (beginning at ages 11 to 15). Cognition reaches its final form. By this stage, the person no longer requires concrete objects to make rational judgements. He or she is capable of deductive and hypothetical reasoning. His or her ability for abstract thinking is very similar to an adult's.

For more information, see Mooney's book Theories of Childhood, Second Edition: An Introduction to Dewey, Montessori, Erikson, Piaget & Vygotsky. Clear, straightforward introductions to foundational theories, including those of Piaget, Dewey, and Vygotsky. Includes discussion questions and insights on how each theory impacts teaching young children today.

Citation: Stage Theory of Cognitive Development (Piaget). (2016, March 05). Retrieved from: http://www.learning-theories.com/piagets-stage-theory-of-cognitive-development.html

Theory of Mind, Empathy, Mindblindness

Theory of mind refers to the ability to perceive the unique perspective of others and its influence on their behaviour - that is, to understand that other people have unique thoughts, plans, and points of view that are different from one's own.

Originators and key contributors:

• Jean Piaget (1896-1980), a Swiss psychologist, described the inability of young children to perceive others' points of view as due to "egocentrism."
• David Premack and Guy Woodruff developed the term Theory of Mind (1978) as applied to their studies on chimpanzees.
• Josef Perner and Heinz Wimmer (1983) extended Theory of Mind to the study of child development.

Keywords: social cognition, child development, false-belief, autism spectrum disorders, mindblindness

Theory of Mind

Theory of mind (ToM) is defined as an implicit understanding of the individual mental states of others and their influence upon behaviour. It is the understanding that others' thoughts and feelings are unique and often different from one's own personal thoughts and feelings, and that both may differ from actual reality. The ability to grasp ToM underlies various aspects of social interaction such as cooperation, lying, following directions, and feeling empathy. Lacking adequate ToM causes difficulty in understanding and predicting the behaviour of others. False-belief tasks are the classic strategy used to test the presence of ToM.
False-belief refers to the recognition of the fact that people often make mistakes. Gaining an understanding that one may hold an incorrect belief is a crucial step in ToM development. One variation of this task utilizes a puppet that places a piece of chocolate in a cupboard before leaving. The experimenter then hides the piece of chocolate elsewhere. At this point the child is asked where the puppet will look first for the chocolate when it returns. A child who has not yet grasped ToM (usually a child younger than four years of age) will not be able to separate his own knowledge from the puppet's knowledge, and therefore will falsely conclude that the puppet will look in the chocolate's new location. An older child with developed ToM will correctly assume that the puppet will search for the object in its original position.

Infancy and early childhood are characterized by an inability to consider another's point of view, a feature which Piaget termed egocentrism. For example, when asked what to buy their mother for her birthday, young children will enthusiastically respond with their own favourite toy. They are unable to fathom that she might desire something different from what they desire. This can also be seen in aggressive behaviour towards others, as they falsely assume that what is fun for them is also fun for the object of their aggression. That being said, during infancy and early childhood various behaviours are learned which form the basis on which future ToM will develop. Behaviours that begin to take into account others' points of view include mimicking, joint attention (6-12 months), and pointing (12-18 months). Toddlers become aware of others' emotions and are able to name those emotions even if they do not feel them. In addition, they begin to comprehend the unique likes and dislikes of others, and to separate to some degree between imagination and reality, as seen in pretend play, which appears at this point. The toddler additionally understands the emotional consequences of actions (e.g., if I throw my spoon, mother will be angry) and discerns between intentional behaviour and accidents. Although these skills exist by the age of three, the child is still ambivalent regarding the exact nature of another's perception. According to Piaget, this is because at this point thought processes are dominated by egocentrism.

The emergence of true ToM occurs at around 4-5 years, as executive functioning improves. At this point a more complex perception of the unique desires of others is expected (e.g., although I want the car, she may want something else), alongside the possibility of hidden feelings. Children successfully complete false-belief tasks, while grasping the existence of several truths regarding a single idea. They are more adept at relating their own experiences to others, taking into account that more information should be given if the person was not there. ToM continues to develop, with elementary-school-age children beginning to ponder what others think about them, and utilizing ToM-based language involving deceit, sarcasm, and metaphor.

ToM impairment refers to the state in which ToM does not develop as expected. This state may result from a neurological, cognitive, or emotional deficit. This impairment exists most prominently in autism spectrum disorders (ASD), and serves as one of their primary characteristics. Individuals with ASD who present high cognitive abilities and verbal knowledge still display difficulties in passing ToM tasks.
This impairment of ToM in ASD is also termed "mindblindness." Affected individuals are often unable to perceive social cues and thus have difficulty ascertaining others' motives and intentions. People often mistakenly assume that these individuals do not care about or empathize with others, when in reality there is a true lack of understanding. As a result, they often experience social difficulties. These difficulties cover a vast expanse of social functioning, such as relaying a story to others, pretend play, explaining their behaviour to others, comprehending emotions, engaging in conversations, predicting the behaviour and feelings of others, understanding others' points of view, and generally joining in on social conventions. Thus, children presenting ToM impairment will need directed interventions in order to be more at ease during social interactions.

For further information, please see:

• Theory of Mind: How Children Understand Others' Thoughts and Feelings (International Texts in Developmental Psychology)

Citation: Theory of Mind, Empathy, Mindblindness (Premack, Woodruff, Perner, Wimmer). (2016, March 05). Retrieved from: http://www.learning-theories.com/theory-of-mind-empathy-mindblindness-premack-woodruff-perner-wimmer.html

Behaviourism is a worldview that operates on a principle of "stimulus-response." All behaviour is caused by external stimuli (operant conditioning). All behaviour can be explained without the need to consider internal mental states or consciousness.

Originators and important contributors: John B. Watson, Ivan Pavlov, B.F. Skinner, E. L. Thorndike (connectionism), Bandura, Tolman (moving toward cognitivism)

Keywords: Classical conditioning (Pavlov), Operant conditioning (Skinner), Stimulus-response (S-R)

Behaviourism is a worldview that assumes a learner is essentially passive, responding to environmental stimuli. The learner starts off as a clean slate (i.e., tabula rasa), and behaviour is shaped through positive reinforcement or negative reinforcement. Both positive reinforcement and negative reinforcement increase the probability that the antecedent behaviour will happen again. In contrast, punishment (both positive and negative) decreases the likelihood that the antecedent behaviour will happen again. "Positive" indicates the application of a stimulus; "negative" indicates the removal of a stimulus. Learning is therefore defined as a change in the learner's behaviour. Much (early) behaviourist work was done with animals (e.g., Pavlov's dogs) and generalized to humans.

Behaviourism precedes the cognitivist worldview. It rejects structuralism and is an extension of logical positivism. Radical Behaviourism, developed by B.F. Skinner, describes a particular school that emerged during the reign of behaviourism. It is distinct from other schools of behaviourism, with major differences in the acceptance of mediating structures, the role of emotions, etc.

Citation: Behaviorism. (2016, March 05). Retrieved from: http://www.learning-theories.com/behaviorism.html

LINKS TO OTHER WEBSITES ABOUT LEARNING THEORY:

Overview of Learning Theories

Although there are many different approaches to learning, there are three basic types of learning theory: behaviourist, cognitive constructivist, and social constructivist. This section provides a brief introduction to each type of learning theory.
The theories are treated in four parts: a short historical introduction, a discussion of the view of knowledge presupposed by the theory, an account of how the theory treats learning and student motivation, and finally, an overview of some of the instructional methods promoted by the theory.

View of knowledge
• Behaviourist: Knowledge is a repertoire of behavioural responses to environmental stimuli.
• Cognitive constructivist: Knowledge systems of cognitive structures are actively constructed by learners based on pre-existing cognitive structures.
• Social constructivist: Knowledge is constructed within social contexts through interactions with a knowledge community.

View of learning
• Behaviourist: Passive absorption of a predefined body of knowledge by the learner. Promoted by repetition and positive reinforcement.
• Cognitive constructivist: Active assimilation and accommodation of new information to existing cognitive structures. Discovery by learners.
• Social constructivist: Integration of students into a knowledge community. Collaborative assimilation and accommodation of new information.

View of motivation
• Behaviourist: Extrinsic, involving positive and negative reinforcement.
• Cognitive constructivist: Intrinsic; learners set their own goals and motivate themselves to learn.
• Social constructivist: Intrinsic and extrinsic. Learning goals and motives are determined both by learners and by extrinsic rewards provided by the knowledge community.

Implications for teaching
• Behaviourist: Correct behavioural responses are transmitted by the teacher and absorbed by the students.
• Cognitive constructivist: The teacher facilitates learning by providing an environment that promotes discovery and assimilation/accommodation.
• Social constructivist: Collaborative learning is facilitated and guided by the teacher. Group work.
According to the United Nations, 3.5 billion people currently live in cities, and with exponential urbanization in developing countries, the world's urban population is expected to reach 5 billion by 2030. Cities account for roughly three-quarters of the world's energy consumption and carbon emissions. More than 800 million people live in slums, and more than half of all urban dwellers in the world are breathing air polluted at levels at least 2.5 times higher than the safety standard.

Living in cities also has positive aspects. There are generally more job opportunities, and better infrastructure and educational and healthcare opportunities, compared to rural communities. Public transportation is usually available, enabling many people to travel together, reducing fuel use and fuel costs. There is a diversity of cultures in cities, allowing people from various backgrounds to live together and learn from each other.

Diverse efforts aim to address urban problems, including access to clean water and sanitation, affordable and healthy housing, energy-efficient transportation alternatives, and equitable food sources. Building community resilience in cities seeks to minimize potential human and economic losses from future environmental and social challenges.

The United Nations Sustainable Development Goal 11 (Sustainable Cities and Communities) aims to address the challenges of urbanization and to ensure that cities are inclusive, green, safe, and managed sustainably, by:
- Ensuring universal access to adequate, safe, sustainable, and affordable housing, transportation, and basic services in urban settings by 2030
- Improving cities' waste management, air quality, urban planning, and infrastructure to reduce adverse environmental impact and improve resilience to disasters
- Providing universal access to safe, inclusive, accessible, and green public spaces by 2030, especially for the most vulnerable groups (women, children, the elderly, and people with disabilities)
Thinking about the Indian Removal Act, at the National Archives Museum and National Museum of the American Indian

"Our cause is your own. It is the cause of liberty and justice." Principal Chief John Ross (Cherokee, 1790–1866), appearing before the U.S. Senate in 1836 to argue on behalf of the Cherokee Council against ratification of the Treaty of New Echota, ceding Cherokee lands to the United States

This spring, I visited the National Archives Museum in Washington, D.C., to see the Indian Removal Act, on display in the Archives' Landmark Document Case. Signed by President Andrew Jackson on May 28, 1830, the Removal Act gave the president the legal authority to remove Native people by force from their homelands east of the Mississippi to lands west of the Mississippi. It became one of the most detrimental pieces of legislation for American Indians in U.S. history. Under the Removal Act, the military forcibly relocated approximately 50,000 American Indians to Indian Territory, within the boundaries of the present-day state of Oklahoma.

At the National Museum of the American Indian, we address the importance of the Removal Act in two major exhibitions—Nation to Nation, which opened in September 2014 and will be on view through 2021, and Americans, opening October 26 of this year and on view through fall 2027.

"Many of these helpless people did not have blankets and many of them had been driven from home barefooted. . . . And I have known as many as twenty-two of them to die in one night of pneumonia due to ill treatment, cold, and exposure." Private John G. Burnett (1810–unknown), Captain Abraham McClellan's Company, 2nd Regiment, 2nd Brigade, Mounted Volunteer Militia, account of the removal of the Cherokee, from a letter to his children written in 1890

Many Americans, and many people beyond the United States, know the story of removal—or part of the story. In the late 1830s, more than 20,000 Cherokee men, women, and children were removed from their homelands. Approximately one-fourth of these people died along the Trail of Tears—bayoneted, frozen to death, starved, or pushed beyond exhaustion. Less well known, perhaps, is that hundreds of other tribes shed tears as well, as they were forced to leave their homes to make room for non-Indian settlement and ownership of their land. Through American expansion, every tribe lost land its people originally called home.

"They were not allowed to take any of their household stuff, but were compelled to leave as they were, with only the clothes which they had on." —Wahnenauhi (Lucy Lowrey Hoyt Keys, Cherokee, 1831–1912), account of the Cherokee removal written in 1889, published by the Smithsonian Bureau of American Ethnology in Bulletin 196, Anthropological Papers, No. 77

The museum's exhibitions look at the Removal Act from the broader perspective of events at the time it was enacted and during the nearly two centuries since. In the companion book to Nation to Nation, Robert N. Clinton, Foundation Professor of Law at the Sandra Day O'Connor School of Law at Arizona State University, describes the growing sense of national strength that allowed the federal government to move away from conducting negotiations with Indian nations as a sort of diplomacy—based on transnational law, mutual interests, and tribal sovereignty—and toward the direct pursuit of its one-sided goals: The War of 1812 eliminated the possibility of Indian alliances with Britain, which had posed a threat to the stability and security of the United States. Thereafter . . .
the bargaining power in treaty discussions shifted greatly to the United States, and policy was increasingly dictated by the federal government. . . . After a decade of treaty negotiations on the subject, the southeastern states provoked a controversy over the continued presence of the Cherokee, Chickasaw, Muskogee (Creek), Choctaw, and Seminole nations on lands within state borders. Congress decided to chart the policy unilaterally by adopting the Removal Act of 1830. Nation to Nation also explores the place of the Removal Act in U.S. legal history. The exhibition shows how advocates and Native and non-Native opponents of removal battled in Congress and the courts—all the way to the Supreme Court—at the same time tribal leaders were working to ensure the survival of their people. Americans, which will explore Indians and the development of America's national consciousness through four iconic events—Thanksgiving, the life of Pocahontas, the Trail of Tears, and the Battle of Little Bighorn—widens the museum’s perspective on the Removal Act even more. In developing the themes of the new exhibition, lead curator Paul Chaat Smith (Comanche) and co-curator Cécile R. Ganteaume wrote: Democracy at the Crossroads—the section of Americans about the Trail of Tears—explores the contemporary relevance of removal and why it is still embedded in 21st-century American life. We focus on crucial elements of the history that usually do not receive the attention they deserve: A vigorous national debate over removal consumed the United States before passage of the Indian Removal Act. With the eyes of the Western world upon them, members of Congress cloaked the Removal Act in humanitarian language. The actual removal of Native nations from the South across the Mississippi was a massive national project that required the full force of the federal bureaucracy to accomplish. Finally, it is due to efforts of young Cherokees in the early 20th century that the expression “trail of tears” has come to be known throughout the country, if not the world, to represent a gross miscarriage of justice. In the central space that links the four iconic events in Americans, visitors will find themselves surrounded by photographs and commercial art. The idea is to show how images of Indians—and Native names and words from Native languages—are and have always been everywhere around us in the United States. Once we look, we can see them as national symbols on monuments, coins, and stamps; in the marketing of just about anything you can think of; in the Defense Department’s naming conventions for weapons; and as part of pop culture. The reality of images and references to Indians everywhere is illustrated, for the time being, by the 1948 Indian Chief motorcycle on view in the museum’s atrium. I confess that as I stood before the original Removal Act at the National Archives, it was hard for me to reconcile the events it set in motion with the motorcycle’s very American celebration of freedom. The curators of Americans hope, however, that the new exhibition will encourage visitors to be part of a new conversation among Natives and non-Natives about the place Indians continue to hold in our understanding of America. It’s an important conversation, and I’m committed to being part of it.
Developing and sustaining foundational language skills: listening, speaking, reading, writing, and thinking--beginning reading and writing. The student develops word structure knowledge through phonological awareness, print concepts, phonics, and morphology to communicate, decode, and spell.

Knowledge and Skills Statement

A knowledge and skills statement is a broad statement of what students must know and be able to do. It generally begins with a learning strand and ends with the phrase "The student is expected to:" Knowledge and skills statements always include related student expectations.

Ask students to segment words into syllables. They can orally demonstrate a break between syllables or identify the syllables through an action such as clapping, using fingers, or moving counters. For example: "You are going to listen to words and tell me the syllables you hear. For example, the syllables you hear in the word butter are /but/-/ter/. Can you tell me the syllables you hear in these words?"

Glossary Support for ELA.K.2.A.vi

Yopp, H., & Yopp, R. (2000). Supporting Phonemic Awareness Development in the Classroom. The Reading Teacher, 54(2), 130–143. Retrieved from http://www.jstor.org/stable/20204888

Summary: Yopp and Yopp describe phonemic awareness and provide ideas for activities that focus on rhyme, syllable manipulation, onset-rime manipulation, and phoneme manipulation.
Some pupils think that friction arises only when there is motion. Furthermore, few pupils understand that the size of the friction force matches the externally applied force, up to a limit (for any given pair of surfaces). The following worksheets may help to identify whether students hold this particular misconception. For more information, see the University of York EPSE website.

Resources that address this misconception:
• Source - SPT/ Fo02PN08: This resource gives an accessible explanation of friction.
• How can you tell if there is friction? (5-11). Source - SPT/ Mf03TL02: An activity to explore the idea that friction exists between all surfaces, both moving and tending to move - slip and grip.
• Friction between solid surfaces (11-16). Source - Practical physics/ Force and motion/ Friction, turning and other effects: Part 4 is particularly relevant.

The following studies have documented this misconception:
This post was written 8 years ago. For my current approach to this topic, which uses transformation equations, please follow this link: Function Transformations: Dilation

This post explores one type of function transformation: "dilation". If you are not familiar with "translation", which is a simpler type of transformation, you may wish to read Function Translations: How to recognize and analyze them first.

A function has been "dilated" (note the spelling… it is not spelled or pronounced "dialated") when it has been stretched away from an axis or compressed toward an axis.

Imagine a graph that has been drawn on elastic graph paper, and fastened to a solid surface along one of the axes. Now grasp the elastic paper with both hands, one hand on each side of the axis that is fixed to the surface, and pull both sides of the paper away from the axis. Doing so "dilates" the graph, causing all points to move away from the axis to a multiple of their original distance from the axis. As an example of this, consider the following graph:

The graph above shows a function before and after a vertical dilation. The coordinates of two points on the solid line are shown, as are the coordinates of the two corresponding points on the dashed line, to help you verify that the dashed line is exactly twice as far from the x-axis as the same color point on the solid line. The origin is a point shared by both lines, and it is useful to note that the dashed line is still "twice as far from the x-axis" at the origin, because 2 · 0 = 0. Any point that satisfies a function definition and lies on the x-axis will not move when the function is dilated vertically.

There are two ways we can describe the relationship between the two functions graphed above. Either:
- the solid line has been "dilated vertically by a factor of 2" to produce the dashed line, or
- the dashed line has been "dilated vertically by a factor of 0.5" to produce the solid line.

Both statements describe the graph accurately. However, in general the function definition which is simplest (in algebraic terms) will be considered the "parent" function, with the more complex-looking definition being described as a dilation of the simpler function.

A function such as g(x) = 3(x - 1)² + 5 (graphed as the dashed curve below) is easier to analyze if you perceive it as related to a simpler "parent" function, f(x) = x² (graphed as the solid curve below), which has been both dilated and translated: f(x) has been dilated vertically by a factor of 3, then translated vertically by +5 and horizontally by +1 to produce g(x).

The blue point at the origin, which is the vertex of the solid parabola, had its y-coordinate (0) multiplied by three, then had five added to it: (0) x 3 + 5 = 5. It was then shifted one unit to the right, causing its x-coordinate to change from 0 to 1. So, the "parent" vertex that was at the origin is located at (1, 5) in the transformed function.

The green point on the solid parabola (2, 4) also had its y-coordinate (4) multiplied by three and had five added to it: 4 x 3 + 5 = 17. It was then shifted one unit to the right, just as the vertex was, and that point (3, 17) satisfies the equation of the dashed parabola, g(x).

Visualizing functions as translations and dilations of a simpler "parent function" can make complex-looking equations much easier to interpret.

Note that a negative dilation factor causes both a dilation and a reflection about the axis to occur. All points that were on one side of the axis of dilation are reflected to the other side of the axis by a negative dilation factor.
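To make the point-mapping concrete, here is a minimal numeric sketch (Python, not part of the original post) of the vertical dilation and translation just described. It assumes the reconstructed equations f(x) = x² and g(x) = 3(x - 1)² + 5 that fit the plotted points quoted above:

```python
# Vertical dilation by 3, then translation up 5 and right 1.
# f and g are the equations reconstructed from the points in the text (assumed).

def f(x):
    return x ** 2

def g(x):
    return 3 * (x - 1) ** 2 + 5

for x, y in [(0, 0), (2, 4)]:           # points on the parent function f
    x_new, y_new = x + 1, 3 * y + 5     # y tripled and raised 5, point shifted right 1
    assert g(x_new) == y_new            # the mapped point lands on g
    print(f"({x}, {y}) -> ({x_new}, {y_new})")
```

Running it prints (0, 0) -> (1, 5) and (2, 4) -> (3, 17), matching the vertex and the green point discussed above.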
Consider the solid parabola below, which represents the function y = x². If it is translated vertically by +4, so that its vertex moves from (0,0) to (0,4), the equation becomes f(x) = x² + 4, which is graphed by the dashed parabola below.

What happens to the graph of the dashed parabola f(x) if every term in its equation is multiplied by three? We'll refer to the result of this multiplication as g(x): g(x) = 3x² + 12.

Note that we could easily write this second function in terms of the first: g(x) = 3 · f(x). By defining g(x) this way, we are explicitly stating that every y-coordinate produced by g(x) will be three times the corresponding y-coordinate on f(x). In other words, g(x) is f(x) dilated vertically by a factor of three. Every point on the graph of g(x) below (the upper, dotted, parabola) is three times farther away from the x-axis than the corresponding point on f(x).

f(x) passes through the point (2, 8). Since we are examining vertical dilations, let's keep the x-coordinate the same and ask "What will g(2) be?" The original f(x) will be stretched vertically by a factor of three everywhere, including at x = 2, so (2, 8) becomes (2, 24). You can verify for yourself that (2, 24) satisfies the above equation for g(x).

This process works for any function. Any time the result of a parent function is multiplied by a value, the parent function is being vertically dilated. If f(x) is the parent function, then g(x) = a · f(x) dilates f(x) vertically by a factor of "a".

Let's apply this idea to a trigonometric function. Based on the explanation in the previous paragraph, we can conclude that, for example, g(x) = -5 sin(x) represents a vertical dilation by -5 of f(x) = sin(x).

If we apply this approach to another type of function, we can analyze it the same way: any f(x) dilated vertically by a factor of k becomes g(x) = k · f(x).

Applying this approach to an even more complex situation, consider g(x) = 3(x - 1)² + 3. The parent function in this case is f(x) = x² + 1. Note that every instance of "x" in f(x) has had (x - 1) substituted for it, which translates f(x) horizontally by +1. Then this result was multiplied by 3, causing a vertical dilation by a factor of 3: g(x) = 3((x - 1)² + 1) = 3(x - 1)² + 3. The original vertical translation and y-intercept of +1 (the constant term in the definition of f(x)) is also affected by the vertical dilation, and becomes +3 in g(x)… three times the distance from the x-axis that it was originally.

One last example: g(x) = 2 · f(x - 7) + 3. The parent function has been dilated vertically by a factor of +2, translated horizontally by +7, and then translated vertically by +3 (after being dilated vertically), to produce g(x).

Let's return to the graph of f(x) = x² + 4. What happens to this graph if the equation is changed by multiplying every "x" in the equation by three, giving g(x) = (3x)² + 4 = 9x² + 4? Once again, we can describe g(x) more compactly if we do so using f(x); however, this time the dilation factor is multiplied by the function's "input variable" instead of its "result" (as was done to produce a vertical dilation): g(x) = f(3x).

Note that f(x) passes through the point (3, 13). Since we are thinking about horizontal dilations, let's ask "What value must 'x' have if g(x) is to produce this same output of 13?" Since g(1) = 9(1)² + 4 = 13, the point (3, 13) on the graph of f(x) corresponds to the point (1, 13) on g(x). Verify for yourself that the point (1, 13) satisfies the equation for g(x). Since (3, 13) moved to (1, 13), multiplying every "x" in f(x) by 3 has compressed the graph horizontally, with each point being moved to one third of its previous distance from the y-axis.
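The same kind of spot-check works for the horizontal dilation just described. A small sketch (again assuming the reconstructed f(x) = x² + 4, with g(x) = f(3x)):

```python
# Horizontal dilation: multiplying the input by 3 compresses the graph toward
# the y-axis by a factor of 3. f is the equation reconstructed from the text (assumed).

def f(x):
    return x ** 2 + 4

def g(x):
    return f(3 * x)

assert f(3) == 13      # (3, 13) lies on f
assert g(1) == 13      # the corresponding point on g is (1, 13)

# Every point (x, y) on f maps to (x/3, y) on g; checking at multiples of 3
# keeps the arithmetic exact:
for x in [-6, -3, 0, 3, 6]:
    assert g(x // 3) == f(x)
print("all mapped points verified")
```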
If multiplying the result of a function by a factor causes a vertical dilation by the same factor, why does multiplying the input variable by a factor cause a horizontal dilation by the reciprocal of that factor? To ask the question another way, if using a coefficient greater than one expands things vertically, why does it shrink things horizontally? This difference in effect seems counter-intuitive at first glance.

The difference occurs because vertical dilations occur when we scale the output of a function, whereas horizontal dilations occur when we scale the input of a function. The "x" in the original f(x) became a "3x" in g(x), so g(x) reaches a given "input value" three times faster than f(x): "x" only has to be 1/3 as big in g(x) for the result of the equation to be the same as f(x). Therefore, all points on g(x) have been scaled to be 1/3 of the distance from the vertical axis that they were in f(x).

This process works for any function. Any time the input of the "parent function" is multiplied by a value, the parent function is being horizontally dilated. If f(x) is the parent function, then g(x) = f(a · x) represents a horizontal dilation of the parent function by a factor of "1/a".

Apply this idea to a slightly more complex situation: g(x) = sin(5x) represents a horizontal dilation by a factor of 1/5 (toward the vertical axis) of f(x) = sin(x). In other words, the period of f(x) is 2π, and the period of g(x) is 2π/5.

Horizontal dilations of a quadratic function look a bit more complex at first, until you become accustomed to the pattern you are looking for: g(x) = (x/2)² represents a horizontal dilation by a factor of 2 (away from the vertical axis) of f(x) = x². Note that every instance of "x" in the parent function must be changed to (x/2) for the new equation to represent a horizontal dilation of the parent by a factor of 2.

Applying this approach to a fractional situation: g(x) = f(kx) represents a horizontal dilation by a factor of 1/k of f(x).

What's The Difference?

In contemplating both vertical and horizontal dilations, you may have realized that the graphs of some functions, such as g(x) = 4x² = (2x)², could be considered either a vertical dilation by a factor of 4 or a horizontal dilation by a factor of 1/2. It is interesting to note that both dilations, stretching it vertically or squeezing it horizontally, have the same end result for this function. Can this be true for other functions as well? Consider the following equivalent equations:

y = (6x - 12)² = (6(x - 2))² = 4(3(x - 2))² = 36(x - 2)²

This example demonstrates that some functions can be transformed to the same end result by either a horizontal dilation, a vertical dilation, or a combination of both. In the example above, the following three sets of dilations and translations of the parent function y = x² produce the same graph:

1) Dilated horizontally by a factor of 1/6, then translated horizontally by +2. No vertical dilation.
2) Dilated horizontally by a factor of 1/3, then translated horizontally by +2. Dilated vertically by a factor of 4.
3) No horizontal dilation, translated horizontally by +2. Dilated vertically by a factor of 36.

Note how the horizontal translations change as the horizontal dilations change. Since a horizontal dilation shrinks the entire graph towards the vertical axis, the graph's horizontal translation shrinks by the same factor. As the original horizontal dilation factor of 1/6 in the example above is increased by a factor of 6 to be 1 (becoming converted into a vertical dilation factor of 36 in the process), the original horizontal translation of 12 shrinks by a factor of 6 to become 2.

So which of all the above options is the "normal" way of describing this graph?
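Before settling on an answer, it is worth confirming numerically that the three descriptions above really do produce the same graph. A quick sketch (Python, with y = x² as the parent function):

```python
# Three dilation/translation recipes that should all trace y = (6x - 12)^2.

def option1(x):
    return (6 * (x - 2)) ** 2        # horizontal dilation by 1/6, translated +2

def option2(x):
    return 4 * (3 * (x - 2)) ** 2    # horizontal dilation by 1/3, translated +2, vertical dilation by 4

def option3(x):
    return 36 * (x - 2) ** 2         # translated +2, vertical dilation by 36

for x in range(-3, 8):
    assert option1(x) == option2(x) == option3(x)
print("all three options agree")
```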
Having a preferred way of describing it will make it more likely that different people will describe the graph in the same way. The "normal" way of describing a combination of dilations and translations is to convert all dilations into vertical dilations by manipulating the expression so that the independent variable has a coefficient of one:

y = (6x - 12)² = (6(x - 2))² = 6²(x - 2)² = 36(x - 2)²

So this equation represents a vertical dilation by a factor of 36 and a horizontal translation of +2 of the equation y = x².

If you were not interested in the vertical dilation, but only in the horizontal translation, you could solve the independent variable expression (before applying any exponent) for zero: 6x - 12 = 0, which gives x = 2. This tells us that the "parent function" has been translated horizontally by +2 after all dilations have been carried out.

Dilation About Lines Away From An Axis

In some situations it will be useful to dilate a function relative to a horizontal or vertical line other than the axis. To achieve this, we need to:
- Translate the graph so that the part of the graph that is to remain unchanged by the dilation is moved to the axis
- Dilate the graph by the desired amount
- Translate the dilated function back to its original location

Suppose we wish to dilate a function f(x) vertically by a factor of 3 about the line y = 2. The above steps produce the following for the function f(x). Translate f(x) down 2, so that the line about which we wish to dilate is moved onto the x-axis:

f(x) - 2

Dilate the translated function vertically by a factor of 3:

3(f(x) - 2)

Now "undo" the original vertical translation by translating it back up 2:

g(x) = 3(f(x) - 2) + 2 = 3f(x) - 4

If you graph both f(x) and g(x) on the same graph, as shown above, you will note that the two graphs intersect one another on the line y = 2, which is the line about which we dilated f(x). Those intersection points, where f(x) = 2, are the only points on the graph of f(x) that remain unchanged by the dilation.

This same process can be followed to create horizontal dilations about some vertical line: translate the function horizontally, then dilate it, then translate the result back to where it started.

Want to Play?

If you would like to play around with vertical dilations and see how they work, try any of the following Geogebra applets. The only one that lets you play with horizontal dilations is the last one (Sine Function):
– Quadratic function in vertex form
– Exponential function
– Sine function

You may also be interested in a topic based on the ideas in this post:
– Using Corresponding Points to Determine Dilation Factors and Translation Amounts
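As a final check, here is a short sketch of the dilation-about-a-line recipe above, using an assumed example parent function f(x) = x² (any function would work the same way):

```python
# Dilate f vertically by a factor of 3 about the line y = 2:
# translate down 2, dilate about the x-axis, translate back up 2.

def f(x):
    return x ** 2

def g(x):
    return 3 * (f(x) - 2) + 2    # simplifies to 3*f(x) - 4

# Every point's distance from y = 2 should be tripled...
for x in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert abs((g(x) - 2) - 3 * (f(x) - 2)) < 1e-12

# ...so points where f(x) = 2 (here x = +/- sqrt(2)) stay put.
x0 = 2 ** 0.5
print(g(x0))   # ~2.0, a fixed point of the dilation
```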
We are all about education. Here we share some of our favorite educator resources on apple farming, biotechnology, and agriculture.

High School. Introduces students to the relationships between chromosomes, genes, and DNA molecules. It also provides activities that clearly show how changes in the DNA of an organism, made using either natural or scientific techniques, can cause changes. Available via National Agriculture in the Classroom.

High School. This lesson provides students with a brief overview of biotechnology, equipping them with the ability to evaluate the social, environmental, and economic arguments for and against genetically modified crops. Available via National Agriculture in the Classroom.

Middle Years – High School. Covers in depth the concepts of genetics, including an introduction to human inheritance, genetic breeding, Punnett squares, the importance of genetic diversity, biotechnology, gene marker selection, and the use of biotechnology for sustainable agriculture. Available via California Foundation for Agriculture in the Classroom.

Middle Years. Learn about apple genetics related to production through a hands-on activity exploring the characteristics of apple varieties. Students will apply their knowledge of heredity and genetics to discover how new varieties of apples are developed through cross-breeding techniques. Available via National Agriculture in the Classroom.

Middle Years. Six lessons designed to encourage students to think critically about topics such as sustainability, genetically modified organisms (GMOs), and biodiversity. Students will explore information from a variety of sources and apply their knowledge through hands-on activities and engaging projects. Available via Good in Every Grain.

Middle Years. Students will learn about two types of plant propagation – seed planting (sexual) and stem cuttings (asexual) – and will recognize the genetic differences between these processes, as well as the advantages and disadvantages of each method. Available via National Agriculture in the Classroom.

Early Years. Students will explore heredity concepts by comparing observable traits of apples and onions, collecting data on traits of different apple varieties, and learning about apple production. Additional activities include hands-on methods for testing apple ripeness. Available via National Agriculture in the Classroom.

Want to learn more about Arctic® apples? We offer a variety of content intended to help all ages better understand Arctic® apples and the apple industry in general. Find some of our top picks below.

A look at how Arctic® apples were improved with their nonbrowning benefit using gene silencing. Links to further information are provided in the text for those wanting a more in-depth look.
Tyrannosaurus rex — T. rex for short — lived during the upper Cretaceous Period, 67 million to 65 million years ago, toward the end of the Mesozoic Era. The name Tyrannosaurus rex means "king of the tyrant lizards." The animal's length was about 40 feet (12 meters), its height 15 to 20 feet (4.6 to 6 meters), and its weight could top 9 tons (8,200 kilograms). T. rex ate mostly meat; its favorite meals were probably herbivorous dinosaurs such as Edmontosaurus and Triceratops. Tyrannosaurus had a massive, thick skull 5 feet (1.5 meters) long, and its 4-foot-long (1.2 meters) jaw could easily crush bones. Serrated, conical teeth were most likely used to pierce and grip flesh. Its strong thighs and long, powerful tail helped T. rex move quickly, and the animal was able to run at speeds of up to 15 mph (24 kph). T. rex had about 200 bones, roughly the same number as humans. Fossils of different Tyrannosaurus species have been found in Montana, Texas, Utah and Wyoming, as well as Canada (Alberta and Saskatchewan) and Mongolia in Asia. Some scientists consider the Mongolian variety of Tyrannosaur fossils to belong to a separate species, Tarbosaurus bataar.
The universe is incredibly massive. Nevertheless, its mass must be spectacularly fine-tuned for life to be possible.

Exactly how massive the universe is remained unknown until astronomers focused the Hubble Space Telescope on a patch of sky no bigger than a tenth the Moon's (angular) diameter, and held it there for some 278 hours. This Ultra Deep Field successfully imaged all the galaxies (at least those bigger than dwarfs) that exist in that region. The field contains roughly 10,000 galaxies. By extrapolation, then, astronomers determine that the entire observable universe contains at least 200 billion galaxies. These galaxies contain an estimated average of 200 billion stars each. The total number of stars in these galaxies, then, is 40 billion trillion. The unobserved dwarf galaxies would contribute an estimated additional 10 billion trillion. Thus, the total number of stars in the observable universe adds up to about 50 billion trillion.

Fifty billion trillion stars: that's an unimaginably enormous universe. And yet the universe is more massive by far. The stars, both those that are still shining and those that have burned out, account for just one percent of the universe's total mass!

One reason the universe must be so massive is that life requires it. The density of protons and neutrons determines how much of the universe's hydrogen fuses into heavier elements. With a slightly lower density (producing fewer than about 50 billion trillion observable stars), nuclear fusion would be less productive, and at no time in cosmic history (either in the big bang or in stars) would elements heavier than helium be produced. If the density were slightly higher (producing more than about 50 billion trillion observable stars), nuclear fusion would be so productive that only heavier-than-iron elements would exist. Either way, life-essential elements such as carbon, nitrogen, oxygen, and phosphorus would be too scarce or nonexistent.

Another life-related reason the universe must be so massive is that the cosmic mass critically influences the universe's expansion rate. If the mass density were smaller, the influence of gravity would be too weak for stars like the Sun and planets like Earth to form. On the other hand, if the mass density were greater, only stars much larger than the Sun would form. Either way, the universe would contain no stars like the Sun or planets like Earth, and life would have no possible home.

The required fine-tuning is so extreme (one part in a quadrillion quadrillion quadrillion quadrillion) that if one were to remove or add a single dime's worth of mass to this vast cosmos, the balance of the observable universe would be thrown off and physical life would not be possible. Such amazing fine-tuning suggests the involvement of a supernatural, superintelligent Creator.
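For readers who want to retrace the star-count arithmetic quoted in the opening paragraphs, here is a minimal sketch (the figures are the article's rough estimates, not new data):

```python
galaxies = 200e9                 # ~200 billion observable galaxies
stars_per_galaxy = 200e9         # ~200 billion stars in each, on average
stars_in_galaxies = galaxies * stars_per_galaxy   # 4e22 = 40 billion trillion
dwarf_stars = 1e22               # ~10 billion trillion more in dwarf galaxies
total = stars_in_galaxies + dwarf_stars
print(f"{total:.0e} stars")      # 5e+22, i.e. about 50 billion trillion
```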
Protest movements have had a long history in the United States. When a group of people feels marginalized, its members will sometimes gather in a show of numbers. Parades and speeches are the usual forum, but sometimes protests turn violent; see riot, hooliganism. One of the earliest and most famous protests in American history was the Boston Tea Party, a protest against British-imposed tax laws, most immediately the Tea Act of 1773. In more recent American and world history, protests have become a favorite tactic of liberals and revolutionaries. A frequent pattern is to announce a "peaceful" protest and then provoke the authorities into using a degree of force which appears excessive. This tactic worked well at Kent State, where, after an alleged sniper attack, National Guardsmen opened fire on unarmed students, killing four and wounding nine others.

see also Revolt
Africa is the world's second largest continent, with 20 percent of the Earth's land mass. The African continent is home to a variety of ecosystems, from hot deserts to tropical rain forests. Approximately half of Africa is covered by savannas of some sort (about five million square miles), beginning just below the Tropic of Cancer and continuing down to the Tropic of Capricorn. The circulation of the atmosphere over Africa is dominated by areas of high pressure centered over the adjacent oceans around the Tropics of Cancer and Capricorn. These areas produce winds from the east to northeast over the Sahara and the Kalahari. These regions are arid because they are occupied by dry, subsiding air for most of the year. Moist air moving into Africa, mainly from the South Atlantic and the Indian Ocean, is monsoonal in character; the humid, unstable air moves inland in summer. The seasonality of the rainfall is an extremely important determining feature of the climate almost everywhere in Africa. Grasslands can be broken up into two main categories: tropical grasslands, called savannas, and temperate grasslands. Savannas are characterized by grassland with scattered individual trees. Savannas occur in South America, northern Australia, and Africa. The savannas we are most familiar with are the East African savannas covered with acacia trees (Brown, 1972). Savannas can be caused by three different factors. The first is climate (climatic savannas), the second is soil conditions (edaphic savannas), and the third is people clearing forest land for farming (derived savannas). In Africa, climate is the most important factor in creating a savanna. Savannas are found in hot climates where the annual rainfall can vary from 20 to 50 inches per year, but the rainfall must be concentrated into six to eight months of the year.
New research finds molecules responsible for cochlear hair cell regeneration in birds. About 1 in every 20 people in the world experiences "disabling hearing loss," according to the World Health Organization. Caused by the death of the hair cells lining the cochlea of the inner ear, hearing loss in all forms is estimated to affect 15 percent of the world's population. New research from the University's Medical School has provided a stepping-stone to the regeneration of cochlear hair cells and the restoration of hearing. Neuroscience and Cell Biology Prof. Jeffrey T. Corwin suggested that adaptations may be the reason why mammals, unlike many amphibians and birds, lack the ability to regenerate the hair cells responsible for hearing. "These cells tend to be vulnerable to loud sound, so if you're exposed to very loud sound for a long period of time, it can actually kill them off," Corwin said. "The problem is that our ears only make these cells before we're born, which is different from animals like chickens, frogs and fish, which can automatically replace the cells and hook them back up with nerves in as quickly as 10 days." Corwin and Benjamin R. Thiede, a recent graduate of the University's neuroscience graduate program, began to work with chickens in order to determine the force driving the regeneration of hair cells. Within the cochlea, pitch is detected by the stereocilia of the hair cells, so named because the projections look like little hairs. The stereocilia are organized shortest to tallest so that the shortest cilia, located closest to the incoming sound, receive higher frequency waves and the tallest, located further inward, receive lower frequency waves. The stereocilia vibrate upon receiving sound waves and transduce them into a mechanical wave. This wave is received by the basilar membrane, and then transduced into an electrical stimulus. "We tried to discover how these phenotypes are set up, how there's a frequency sensitivity or differences in sound pitch," Corwin said. "If all of these are set up and there's a correct pattern, we can tell the difference between words like cat, bat, that, and hat based upon their pitch." After numerous trials of genotyping differing regions of the cochleae of baby chicks, Thiede found that two molecules, retinoic acid and Bmp7, are responsible for the differing functions of the stereocilia. "These discoveries can lead to predictions as to how to re-grow these cells — in a day in the future when it's possible for scientists to regenerate these cells in humans," Corwin said. "We are going to do the same experiments with mice to see if the mammalian cochlea is similar to the avian cochlea. The main goal is to come up with ways to regenerate hair cells for those who have lost hair cells for age or whatever reason."
Written specifically for education studies students, this accessible text offers a clear introduction to placements and work-based learning, providing an insight into work in schools and education settings. Including case studies to illustrate the diversity of placements and workplace opportunities, it explores the theory and practice of working in educational contexts and supports students as they develop the skills and aptitudes that enhance their employability. With the aim of helping students to prepare for and get the most out of their work placements, the chapters cover the topics listed below. Part of the Foundations of Education Studies series, this textbook is essential reading for students undertaking courses in Childhood Studies, Child and Youth Studies and Education Studies.
Section A: Planning for placements and work-based learning in Education Studies
1. Overview of placement and work-based learning in Education Studies
2. The context of placement and work-based learning
3. The nature of work-based learning on placement
4. Preparing for your placement
5. Assessments and integrating your learning with the rest of your studies
Section B: Placements and work-based learning in context
6. Placements in schools on a core module
7. Placements in cultural settings for Education students
8. Learning on field trips and study visits
9. Working with students with SpLD / Dyslexia on placement
10. Learning from placements: the international dimension
11. Youth and community work placements
12. Placements and work-based learning in Early Years settings
RNA and Protein Synthesis
4. Other modifications of the primary transcript are possible, such as mRNAs with different sets of exons, or ones in which bases are modified or changed from the original.
5. The functional mRNA is transported to the cytoplasm, where translation occurs on ribosomes bound to the endoplasmic reticulum of the cell.
25.3 Enzymatic Synthesis of RNA
The basic chemical features of the synthesis of RNA are the following (Figure 25-3):
1. The precursors of RNA synthesis are the four ribonucleoside 5'-triphosphates (rNTPs): ATP, GTP, CTP, and UTP. The ribose portion of each NTP has an -OH group on both the 2' and 3' carbon atoms.
2. In the polymerization reaction, the 3'-OH group of one nucleotide reacts with the 5'-triphosphate of a second nucleotide; a pyrophosphate is removed and a phosphodiester bond is formed. This same reaction occurs in the polymerization of DNA.
3. The sequence of bases in an RNA molecule is determined by the base sequence of the DNA template strand. Each base added to the growing end of an RNA chain is chosen by base pairing with the appropriate base in the template strand; thus, the bases C, T, G, and A in a DNA strand cause incorporation of G, A, C, and U, respectively, in the newly synthesized RNA molecule. The RNA is therefore complementary to the template (antisense) strand and matches the sequence of the coding (sense) strand, with U in place of T.
4. The RNA chain grows in the 5' -> 3' direction, which is the same as the direction of chain growth in DNA synthesis. The RNA strand and the DNA template strand are also antiparallel.
5. RNA polymerases, in contrast with DNA polymerases, can initiate RNA synthesis; that is, no primer is needed.
6. Only ribonucleoside 5'-triphosphates participate in RNA synthesis, and the first base to be laid down in the initiation event is a triphosphate. Its 3'-OH group is the point of attachment of the subsequent nucleotide. Thus, the 5' end of a growing RNA molecule terminates with a triphosphate. In tRNAs and rRNAs, and in eukaryotic mRNAs, the triphosphate group is removed.
Bacterial RNA polymerase consists of five subunits (two α subunits and one each of β, β', and σ), with a total molecular weight of about 465,000; it is one of the largest known enzymes. The σ subunit is easily dissociated from the enzyme and, in fact, does so shortly after polymerization is initiated. The σ-free unit is called the core enzyme, and the complete enzyme is called the holoenzyme. In this chapter, the name RNA polymerase is used when the holoenzyme is meant. Several different RNA polymerases exist in eukaryotes and are described below. A bacterial cell contains 3000-6000 RNA polymerase molecules; the number is greater when cells are growing rapidly. In eukaryotes, the number of RNA polymerase molecules varies significantly with cell type and is greatest in cells that actively make large quantities of protein.
25.4 Prokaryotic Transcription
The first step in prokaryotic transcription is the binding of RNA polymerase to DNA at a particular region called a promoter.
[Figure: Model of RNA synthesis by RNA polymerase from the sense strand of DNA. See text for details.]
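The base-pairing rule in point 3 is easy to express in code. The following is a minimal illustrative sketch (not from the textbook); it assumes the template string is written 3' to 5' from left to right, so the returned RNA reads 5' to 3':

```python
# Template-strand pairing rule from point 3: DNA bases C, T, G and A
# direct the incorporation of G, A, C and U, respectively, into the RNA.
PAIRING = {"C": "G", "T": "A", "G": "C", "A": "U"}

def transcribe(template: str) -> str:
    """Return the RNA synthesized from a DNA template strand.

    Assumes the template is written 3' -> 5' left to right, so the
    result reads 5' -> 3', matching the direction of chain growth.
    """
    return "".join(PAIRING[base] for base in template.upper())

print(transcribe("TACGGT"))  # -> AUGCCA, complementary and antiparallel
```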
An estimated 7.5-magnitude earthquake shocked and rattled residents of New Madrid, Missouri, on Dec. 16, 1811, leaving behind many cracks, or fractures, in the ground. A fracture in geologic terms is a broken part of the Earth's crust. Fractures can be as small as a cracked boulder or as large as a continent. They can be caused by weathering, pressure or movements of the Earth's crust. Depending on the size, how the fracture occurs and the brittleness of the geologic formation, fractures can be organized into several categories. Joints are fractures where the rock breaks but doesn't move. Joint fractures can be systematic (straight and regular) or nonsystematic (irregular). Sheet or exfoliation joints are curved fractures that occur in intrusive igneous rock. Intrusive rocks form from magma cooled slowly deep within the Earth. Columnar joints are fractures that isolate polygon-shaped columns of rock. Joints can be very small or they can be tectonic, running across a large region. A tensile fracture occurs when the edges pull apart as pressure is applied. Tensile fractures occur in brittle rocks that don't have much ability to bend or fold when a force is applied. The break in the rock runs perpendicular to the pressure that is applied. To visualize this, imagine holding a cracker on the edges and snapping it in half. Tensile fractures may not create movement and are often also classified as joints. If the two edges move away from each other, the result is a tensile fault. A fault is a fracture where the two edges move during the fracturing process. Faults tend to be shear fractures, where one piece of rock slides against the other. They can be strike-slip faults, where the sides of the fracture slide against each other horizontally. They can also be dip-slip faults, where one side of the fracture slides up or down relative to the other. Finally, they can be oblique faults, where both types of movement happen. Shear fractures tend to happen in more ductile rock -- rock that can bend when moved slowly but that breaks under sudden forces. Tectonic Plates and Fault Lines Fractures are part of local and regional geology, but the crust of the Earth itself is broken up into a set of plates that touch each other at dynamic joints. The junctures of tectonic plates are where you find earthquake fault lines, volcanic eruptions and mountains being thrust up, among other features. These gaps between the plates are the largest fractures on Earth, and they control the form and movements of the continents.
How to Use Reading 2: Four Views of European American/American Indian Relations
The following excerpts reflect the attitudes of four people important in the conflicts between European American settlers moving west and the American Indians who had traditionally lived there.
Andrew Jackson to John McKee, 1794.[1]
Thomas Jefferson on the policy of "civilization."
In 1811 Tecumseh traveled through the Southeast, attempting to gain recruits for the Pan-Indian movement. The following is an excerpt from his speech to the Cherokee.[3]
Behold what the white man has done to our people! Gone are the Pequot, the Narraganset, the Powhatan, the Tuscarora and the Coree.... We can no longer trust the white man. We gave him our tobacco and our maize. What happened? Now there is hardly land for us to grow these holy plants. White men have built their castles where the Indians' hunting grounds once were, and now they are coming into your mountain glens. Soon there will be no place for the Cherokee to hunt the deer and the bear. The tomahawk of the Shawnee is ready. Will the Cherokee raise the tomahawk? Will the Cherokee join their brothers the Shawnee?
Junaluska, Tochalee and Chuliwa were Cherokee chiefs. These were their responses to Tecumseh, 1811.[4]
Junaluska: We know that they have come to stay. They are like the leaves in the forest, they are so many. We believe we can live in peace with them. No more do they molest our lands. Our crops grow in peace....
Tochalee and Chuliwa: After years of distress we found ourselves in the power of a generous nation.... We have prospered and increased, with the knowledge and practice of agriculture and other useful arts. Our cattle fill the forests, while wild animals disappear. Our daughters clothe us from spinning wheels and looms. Our youth have acquired knowledge of letters and figures. All we want is tranquility.
Questions for Reading 2
1. Why, according to General Jackson, did American Indians negotiate treaties?
2. Who are the "other sources" Jackson said settlers would turn to if the U.S. government did not help them fight Indians?
3. How did Thomas Jefferson think the policy of "civilization" would help European American settlement?
4. What events did Tecumseh refer to in order to get the Cherokee to join him? Why?
5. What method did Tecumseh advocate to stop European American expansion?
6. What reasons did the Cherokee chiefs give for not joining Tecumseh?
7. How did Jackson's and Tecumseh's views of the origins of European American/American Indian conflict compare?
There are three terms often used in precision practices, and they are often used incorrectly or in a vague manner. The terms are accuracy, repeatability, and resolution. Because the present discussion is on machining and fabrication methods, the definitions will be given in terms related to machine tools. However, these terms have applicability to metrology, instrumentation, and experimental procedures as well. Precision engineering deals with many sources of error and their solutions. Precision is one of the most important things in the manufacturing field, and machining is an important part of the manufacturing process. Many factors, such as feedback variables, machine tool variables, spindle variables, workpiece variables, and environmental effects such as thermal errors, affect the accuracy of a machine. The main goal of precision engineering is to reduce the uncertainty of dimensions. Achieving an exact dimension is very difficult, so a tolerance is allowed on the workpiece.
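To make the distinction between the first two terms concrete, here is a minimal illustrative sketch (a simplified teaching definition, not a formal standard such as ISO 230-2) that computes accuracy and repeatability for repeated moves to one commanded position:

```python
import statistics

def accuracy_and_repeatability(readings, target):
    """Simplified metrics for repeated positioning moves to one target.

    accuracy_error: offset of the mean achieved position from the target.
    repeatability:  scatter (sample standard deviation) of the positions.
    A machine can be highly repeatable yet inaccurate: tight scatter
    around the wrong position.
    """
    accuracy_error = statistics.mean(readings) - target
    repeatability = statistics.stdev(readings)
    return accuracy_error, repeatability

# Hypothetical example: ten moves commanded to 100.000 mm.
readings = [100.012, 100.011, 100.013, 100.012, 100.010,
            100.012, 100.013, 100.011, 100.012, 100.012]
err, rep = accuracy_and_repeatability(readings, 100.000)
print(f"accuracy error = {err:.4f} mm, repeatability = {rep:.4f} mm")
```

Resolution, by contrast, is the smallest increment the machine can be commanded to move (or its scales can report), and cannot be inferred from such a data set alone.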
Scientists at the National Eye Institute, part of the Federal government's National Institutes of Health, have discovered how a defective gene in mice leads to blindness. Correcting this deficiency may, in the future, restore vision to people who are born with blinding conditions called retinal dystrophies. In these disorders, there is a breakdown in the process that allows us to see. A paper detailing these findings is published in the December 1998 issue of Nature Genetics. "This research provides hope for some people with severe retinal dystrophy, such as Leber's congenital amaurosis, who are born blind or lose their vision in early childhood," said Dr. Carl Kupfer, director of the National Eye Institute. "These results may allow scientists someday to develop treatments that will restore vision to people who have been blind for most of their lives." Researchers previously knew that the gene RPE65 produces a protein, also called RPE65, that is essential for normal vision. This RPE65 protein is confined to the retinal pigment epithelium cells, which process vitamin A in the visual system. But why is this protein essential? What contributions does this protein make to the visual system? To answer these questions, scientists "disrupted" the RPE65 gene in mice, which is 95 percent similar to the RPE65 gene in humans. The researchers found that the RPE65-deficient mouse was not producing a form of vitamin A called 11-cis-retinal that is essential for vision. This, in turn, led to a breakdown in the visual cycle, causing severe blindness. "This research gives scientists a fundamental understanding of the underlying causes of severe retinal dystrophy," said Dr. Michael Redmond of the National Eye Institute's Laboratory of Retinal Cell and Molecular Biology and the principal investigator of the study. "Scientists first need to understand the cause of vision loss before they can develop treatments. If we can reverse the blindness in mice, we can think about reversing the blindness in humans." In the RPE65-deficient mice, Dr. Redmond found that the eye's rod photoreceptors - which allow us to see in dim light - were not working. But why not? Upon further investigation, Dr. Redmond and colleagues discovered that the rod photoreceptors were not producing a crucial ingredient called rhodopsin, which rod photoreceptors need to convert light into signals that are sent to the brain. It is these neuronal signals that allow us to see. Dr. Redmond and colleagues discovered that the reason rhodopsin was not being produced was that one of its main components, 11-cis-retinal, was not being generated. They concluded that severe blindness resulted in the mice because the RPE65-deficient retina did not produce 11-cis-retinal, the form of vitamin A essential for making rhodopsin. The researchers also discovered that, unlike the rod photoreceptors, the eye's cone photoreceptors - which allow us to see in bright light - were not affected by the lack of the RPE65 gene. "This tells us that the cone photoreceptors in mice must have their own supply of 11-cis retinal vitamin A," Dr. Redmond said. "But where is it coming from? Both the rods and cones need 11-cis retinal vitamin A to work correctly." The research revealed another important discovery: vitamin A accumulates in the RPE65-deficient retinal pigment epithelium cells. In the normal visual cycle, the retinal pigment epithelium cells convert the dietary form of vitamin A into the visual 11-cis retinal form.
"When you eat a carrot, vitamin A (which is converted from the carrot's beta-carotene) goes to the retinal pigment epithelium cells, which convert the vitamin into 11-cis retinal," he said. "The 11-cis retinal is then combined with another protein (opsin) to make rhodopsin. But the defective retinal pigment epithelium cells do not convert the vitamin A into 11-cis retinal - the vitamin just accumulates in these cells. Scientists have been searching for the enzyme that converts the dietary form of vitamin A to the 11-cis form. This is a major challenge, but these mice tell us that RPE65 is a major player in this conversion process. "When you take away the RPE65 protein from the retinal pigment epithelium cells in animals, the conversion process of vitamin A stops dead in its tracks," Dr. Redmond said. "By reasoning, this indicates that the RPE65 protein is central to that conversion." Dr. Redmond said that these findings "provide a possible avenue for treating people with severe vision loss caused by defects in the RPE65 gene. Despite the deficient RPE65 gene, the photoreceptors appear to survive. Perhaps if we replace the defective RPE65 gene with a normal gene, we can use the framework already in the eye and restore visual sensitivity. It won't be easy, but we have a starting point." The National Eye Institute, part of the National Institutes of Health, is the Federal government's lead agency for vision research and supports between 70-80 percent of basic and applied vision research in the United States.
Whereas is usually used for comparing the features of two different things, when there is an important difference between them. Whereas can be used when comparing anything: people, objects, places, actions etc. You can use it in a sentence in two ways:
As a conjunction, to join two sentences together when there are two different subjects:
- I like beer whereas she likes wine.
- I'm good at snowboarding whereas she's good at skiing.
As a conjunction, at the start of a sentence where you are comparing two different things:
- Whereas I like blue, she likes green.
- Whereas smaller shops often have more expensive products, larger supermarkets can often put their prices down.
Whereas used in a legal context
Whereas can also be used for legal purposes, in the preamble (introduction) of a legal text, usually a contract. This is called a 'whereas clause', and 'whereas' is used as an introductory word – it isn't used to compare things. Instead, it means "considering that" or "given the fact that". The 'whereas clause' isn't a legal part of the contract's operative provisions – it simply 'paints the picture' or 'sets the stage'; it gives an introduction to the situation. Examples of whereas used in a legal context:
- Whereas party 1 has made allegations against party 2 that party 2 recklessly, negligently and/or fraudulently misrepresented and omitted material facts.
- Whereas party 1 is a dissolved corporation with a formal principal place of business in London.
Whereas synonyms (non-legal sense)
Since whereas is used as a conjunction, joining two ideas together in one sentence, it can often be replaced with other conjunctions.
Whereas vs. While vs. But vs. And
While (when while is not used to signal time): 'While' is used in the same way as 'whereas' in a sentence, but only when 'while' is not used to mean "during the time that something else happens". That means you can replace 'whereas' with 'while' when it is used as a conjunction at the start of a sentence:
- While I work in accounting, she works in the sales department.
- While I prefer going to the beach, I know a lot of people would rather spend their holidays in colder places.
'But' is also used to connect clauses or sentences; however, unlike 'and', it is used to signal a difference between two ideas. The main difference between 'but' and 'whereas' is that 'but' is used to signal a difference (or negative aspect), whereas 'whereas' is used to compare two things that are different. They can be interchanged in certain situations, for example:
- I like pasta but / whereas she likes pizza.
However, in other examples, but is used to present either the opposite of the original idea, or something negative about the original idea:
- I like that restaurant but it's a bit expensive.
- She looked tired but happy.
In these examples, when there is only one subject in the sentence, but can't be replaced with whereas.
'And' is used to connect words, clauses or sentences that should be considered together. You can replace 'whereas' with 'and' when it is used in the middle of a sentence to compare two ideas that oppose one another (are different / contrasting), and the meaning of the sentence won't change much. However, using 'and' instead of 'whereas' means the ideas are simply being stated, rather than compared.
- I like chicken and she likes fish.
- I'm really good at running and he's really good at swimming.
Ornamental pear trees (Pyrus calleryana), also known as Callery pears, are a group of fruiting trees in the rose (Rosaceae) family. They are grown for their ornamental features rather than fruit production. In fact, the fruit on ornamental pear trees is not edible. Ornamental pear trees grow fast, up to 15 feet in 10 years, and live only about 15 to 25 years. The most common cultivars are the Bradford and Capital varieties.
Look at the tree's size and shape. Ornamental pear trees are shade trees, usually with a nice rounded shape. They are taller than they are wide. Bradford pear trees, for example, grow 30 to 50 feet tall and 20 to 30 feet wide.
Observe its flowers. Ornamental pear trees bloom white flowers in the spring, usually even before the leaves begin to form. The five-petaled flowers grow in clusters and lack a pleasant fragrance. The flowers are about ¾-inch wide and only bloom for a couple of weeks.
Examine the fruit. They are small, up to ½-inch in diameter, rounded fruits that are yellowish green or brown. They are not pears and do not look like pears.
Look at the leaves. They are oval, dark green and glossy. They typically turn reddish purple, mahogany red, orange red or bronze red in the fall.
English Civil War
King Charles I ascended to the thrones of England, Scotland and Ireland in 1625. At that time the monarch had the sole power to enact laws but relied on parliament to enforce the collection of taxes. Charles proved to be a poor politician. He treated parliament with contempt, and as a consequence divisions grew between the king, who stubbornly refused to consider parliament's petitions, and parliament, which was increasingly reluctant to cooperate in raising funds for the king. Puritans in parliament were suspicious that the king would undermine the Protestant character of the Church of England following his marriage to a Catholic princess. Divisions between the two sides came to a head when parliament attempted to impeach one of the king's military commanders after a failed intervention in support of the French Huguenots in 1627. Charles responded by dissolving parliament, but finding himself short of funds he reassembled parliament a year later and was forced to accept the Petition of Right in return for the funds he needed. Parliament was dissolved again in 1629, and Charles ruled without parliament for a further 11 years, a period known as the Eleven Years' Tyranny. Over the next decade Charles angered Puritans in England by introducing what they saw as Roman Catholic practices into the Church of England and by imposing fines on those who didn't attend Anglican services. When he attempted to introduce these reforms into Scotland he was met with a violent rebellion. The king's armies were defeated and he was forced to concede the independence of the Scottish Church. Charles tried twice more to defeat the rebels, but this only resulted in the Scots armies occupying much of Northern England, and Charles was forced to pay protection money to them to prevent the pillage of the area. Desperately short of money and with a weak army unable to defend England, Charles was forced to recall parliament in 1640. Parliament took advantage of the king's weak position by forcing him to recognise the right of parliament to assemble regularly, with the passage of the Grand Remonstrance, and by restricting his tax-raising powers. Parliament inflicted further humiliation on Charles when it arrested and executed the king's chief advisor on a charge of treason for granting concessions to Irish Catholics, sparking a rebellion in Ireland. Enraged by parliament's actions, Charles led an attempt to arrest parliamentarians for treason. He failed, and war was now inevitable. Fearing for his safety, Charles left London and toured the country to gain support for his cause, making Oxford his base. Parliament responded by recruiting its own armies. The royalists got the better of the early battles, but their advances had been halted by the autumn of 1643. Charles reached a compromise with the Catholic rebels in Ireland to free up troops for the fight in England. In 1644 parliament made advances in the north thanks to a pact with the Scots, but suffered reverses in the west. In 1645 parliament formed the New Model Army under the command of Thomas Fairfax and Oliver Cromwell. At the Battle of Naseby in 1645 the royalist forces were decisively defeated. After further defeats the king placed himself under the protection of the Scots armies, who were involved in their own civil war. England was now under the control of an uneasy alliance between Parliament, the Scots and the Army, which had become a powerful independent force.
Whilst a prisoner of the Scots, the king negotiated an agreement to introduce religious reforms in Scotland in exchange for supporting a royalist rebellion in England. In 1648 a series of royalist rebellions broke out in England. After some initial reverses, Army forces led by Fairfax and Cromwell achieved a series of crushing victories that culminated in the defeat of the royalist and Scots armies at Preston.
Execution of the King
Despite the betrayal by Charles, most parliamentarians still believed that Charles could be retained as ruler; a minority argued that he could not continue. The Army took matters into its own hands and prevented most of the members still sympathetic to Charles from attending parliament. They then ordered parliament to try Charles for treason. He was found guilty and executed on 30 January 1649. The monarchy was abolished and a republican government was instituted in England under Oliver Cromwell, the Puritan general. Charles II succeeded his father as king of Scotland and immediately attempted to reclaim the English throne, but was easily defeated by Cromwell's army and narrowly escaped to exile in France. Charles I died well and was regarded by his followers as a saintly martyr, a status that made it easier for Charles II to restore the throne in 1660. Following the king's execution parliament was composed of a mixture of religious independents, presbyterians and conservatives. Greater toleration was granted for religious independents (although Catholicism was still repressed) and a number of religiously inspired laws were passed, including the closing of theatres and the enforcement of Sunday observance. Meanwhile, Cromwell sought to eradicate opposition in Ireland, where royalists had made an alliance with Irish Catholics. Between 1649 and 1653 Cromwell's army completed a brutal conquest of Ireland. Acting on fears that parliament would begin to assert its independence from the army, Cromwell dissolved parliament in 1653 and replaced it with a hand-picked parliament, but the new parliament was unable to find agreement between religious radicals and moderates, and in December 1653 Cromwell dissolved parliament and installed himself as Lord Protector, effectively a military dictator. In the end, however, the monarchy was restored, but in a much weaker position compared to a greatly strengthened parliament. Charles II (the son of Charles I) became a popular king for his hedonistic lifestyle, a dramatic change after the puritanical Cromwell (who even banned Christmas).
Making better solar cells: Cornell University researchers have discovered a simple process – employing molecules typically used in blue jean and ink dyes – for building an organic framework that could lead to economical, flexible and versatile solar cells. The discovery is reported in the journal Nature Chemistry. Today's heavy silicon panels are effective, but they can also be expensive and unwieldy. Searching for alternatives, William Dichtel, assistant professor of chemistry and chemical biology, and Eric L. Spitler, a National Science Foundation American Competitiveness in Chemistry Postdoctoral Fellow at Cornell, employed a strategy that uses organic dye molecules assembled into a structure known as a covalent organic framework (COF). Organic materials have long been recognized as having the potential to create thin, flexible and low-cost photovoltaic devices, but it has proven difficult to organize their component molecules reliably into ordered structures likely to maximize device performance. "We had to develop a completely new way of making the materials in general," Dichtel said. The strategy uses a simple acid catalyst and relatively stable molecules called protected catechols to assemble key organic molecules into a neatly ordered two-dimensional sheet. These sheets stack on top of one another to form a lattice that provides pathways for charge to move through the material. The reaction is also reversible, allowing errors in the process to be undone and corrected. "The whole system is constantly forming wrong structures alongside the correct one," Dichtel said, "but the correct structure is the most stable, so eventually, the more perfect structures end up dominating." The result is a structure with high surface area that maintains its precise and predictable molecular ordering over large areas. The researchers used X-ray diffraction to confirm the material's molecular structure and surface area measurements to determine its porosity. At the core of the framework are molecules called phthalocyanines, a class of common industrial dyes used in products from blue jeans to ink pens. Phthalocyanines are also closely related in structure to chlorophyll, the compound in plants that absorbs sunlight for photosynthesis. The compounds absorb almost the entire solar spectrum – a rare property for a single organic material. "For most organic materials used for electronics, there's a combination of some design to get the materials to perform well enough, and there's a little bit of an element of luck," Dichtel said. "We're trying to remove as much of that element of luck as we can." The structure by itself is not a solar cell yet, but it is a model that will significantly broaden the scope of materials that can be used in COFs, Dichtel said. "We also hope to take advantage of their structural precision to answer fundamental scientific questions about moving electrons through organic materials." Once the framework is assembled, the pores between the molecular latticework could potentially be filled with another organic material to form a light, flexible, highly efficient and easy-to-manufacture solar cell. The next step is to begin testing ways of filling in the gaps with complementary molecules.
Possible Fates For Stars After the Giant Phase
The Red Giant phase of a star's life cannot last for ever. There are three possible fates for a red giant star when all its fuel is exhausted, and the fate of each star depends on its mass.
If a star has less than about four solar masses, the remnant after the Red Giant phase will have a mass below 1.4 solar masses. The Red Giant forms a planetary nebula – a confusing term, since no planets are formed as a result. The outer envelope of the star is blown off and disperses into space to leave a white dwarf behind – basically a planet-sized giant atom. Electron degeneracy pressure stops the star collapsing further. The white dwarf is very hot. It eventually cools and becomes a black dwarf – all but invisible.
If a star has a mass greater than about four solar masses, the remnant will have a mass greater than 1.4 solar masses. This is above the 'Chandrasekhar Limit', the greatest mass that can be supported by electrons - the mass limit for a white dwarf. The star explodes in a supernova, briefly shining as brightly as a whole galaxy. If the mass left is less than about three solar masses, it can be supported by neutrons and the result is a neutron star (also called a pulsar) – basically a giant nucleus 10 or so km in diameter. If it is heavier, the result is a black hole, which can only be detected by indirect means.
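The mass limits above amount to a simple decision rule. The sketch below encodes them directly; the thresholds (about four solar masses for the star, 1.4 for the Chandrasekhar limit, about three for the neutron-star limit) are the approximate values quoted in the text:

```python
def fate_after_red_giant(star_mass, remnant_mass):
    """Approximate post-red-giant fate; both masses in solar masses."""
    if star_mass < 4:          # remnant falls below the Chandrasekhar limit
        return "planetary nebula -> white dwarf -> black dwarf"
    if remnant_mass < 3:       # supernova; neutrons can support the remnant
        return "supernova -> neutron star (pulsar)"
    return "supernova -> black hole"

print(fate_after_red_giant(1.0, 0.6))    # a Sun-like star
print(fate_after_red_giant(8.0, 2.0))    # a heavier star
print(fate_after_red_giant(25.0, 5.0))   # the heaviest stars
```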
Irregular sleep-wake rhythm is one of many circadian rhythm disorders, and one of the more uncommon ones. In contrast to most people, who have one main sleeping period and one main period of wakefulness during a typical 24-hour stretch, people with irregular sleep-wake rhythms have numerous such periods during a typical 24-hour day. These would most often be considered naps, and they may have as many as 3-4 napping periods each day, with no main sleeping period. This sleeping pattern is most easily associated with babies, who take numerous naps throughout a day, though babies also tend to have a main sleeping period, and get as many as 12 hours or more of sleep in a typical day. The overall amount of sleep acquired during a 24-hour period is often equal to that of people with more regular sleeping patterns, but it results in less deep sleep, which is required for many of the body's natural regenerative processes, and it may come into conflict with social or professional obligations, leading to feelings of isolation and depression. It can further lead to the development of poor eating habits, memory loss, and other symptoms typically caused by a lack of deep sleep. Irregular sleep-wake rhythm is quite rare, and is often the result of a weak body clock, as are many of the circadian rhythm disorders. It may also be the result of neurological problems, and neurological conditions such as brain damage, dementia and mental retardation may lead to the onset of this disorder. A doctor should be advised regarding this disorder, as it may lead to further mental health issues or sleeping problems, and will likely have an effect on your daily activities. The doctor will need to know when this sleeping pattern started, have a history of past medical conditions, and will need to be informed of any medication or drug use. You will likely need to undergo a neurological test to check for common ailments that may be causing the disorder. You may be asked to wear an actigraph for a short duration, which will chart your periods of activity and inactivity. A polysomnogram overnight sleep study is rarely needed to diagnose this disorder, but may be required to verify that no other sleeping disorders are present that may have led to the development of this disorder. Treatment plans for all circadian rhythm disorders are centered on heightening the sensitivity of your internal clock and having it set to a 24-hour schedule. This can be a long process for those with irregular sleep-wake rhythms. The first step is to have your routine focused on one main sleeping period and one main period of waking. At first this may involve a slow reduction in the number of nap-type periods, with an increase in the duration of each nap. Sleep logs will need to be kept during this time to ensure the plan is being followed properly, and that it is resulting in the desired changes. Light therapy will most often be used to help the body become conditioned to waking and sleeping based on the amount of light present. This will most likely be implemented after the sleeping pattern has been reduced to one or two periods per 24 hours. Other medications may also be prescribed to help attain longer sleep periods during the process of cutting back on the number of naps. This often includes melatonin before any sleeping period, but could also include sleeping pills. Once the desired single sleeping period has been achieved, these may or may not be phased out.
Following proper sleep hygiene is of the utmost importance once the single sleep period has been set. There is always a risk of relapse into old patterns with circadian rhythm disorders like irregular sleep-wake rhythm, so strict bedtimes and waking times should be enforced. This includes setting an alarm to wake in the morning even on days when waking up at a specific time may not be required. You may also need to completely or severely limit your intake of stimulants and sedatives at all hours.
At its most basic, labor means work. Laborers are workers. Labor Day is a yearly celebration of the contributions that workers — whether in offices, on assembly lines, in mines or on factory floors — have made to make the U.S. the strong, prosperous country it is. In the 19th century, labor groups — called unions — began to form in many industries. They sought to help workers fight for fair pay and safe working conditions. Over time, these labor groups asked for special recognition for the role that American workers played in the advancement of the U.S. as an economic and social power. The first celebration of Labor Day occurred on Tuesday, September 5, 1882, when the Central Labor Union held a celebration of the working man. Two years later, in 1884, the Knights of Labor — a labor group in New York City — held a large parade to celebrate working people. Soon, labor groups around the U.S. began to ask states to recognize Labor Day as a holiday. In 1887, the states of Oregon, Colorado, New York, Massachusetts and New Jersey declared Labor Day to be an official state holiday. A few years later, in 1894, Congress established Labor Day as an official national holiday. Since Labor Day is a day of rest for many workers, celebrations also usually involve time spent with family and friends. Many families use the three-day weekend created by Labor Day to hold family reunions and barbecues or to take short trips before the summer ends.
Cellular is an animation that helps you make geometric sequences composed of square cells. Explore what happens when you draw graphs of quadratic equations with coefficients based on a geometric sequence. Watch the video to see how to sum the sequence. Can you adapt the method to sum other sequences? (A worked sketch of the summing method appears after the questions below.)
2) What is the area of the blue portion of this figure?
3) What is the area of the orange portion of this figure?
If you continue the pattern, can you predict what each of the following areas will be? Try to explain your prediction.
Now imagine that instead of the pattern growing, we start with a square and the pattern continues inwards - with the circles and squares becoming smaller and smaller. If the areas of the four blue shapes labelled A, B, C and D are one unit each, what is the combined area of all the blue shapes? Explain any reasoning you have used.
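For the summing task, a few lines of code make a useful check. The sketch below uses a placeholder first term and ratio (the figure's actual values are not reproduced here) and compares the partial sums against the closed form a/(1 - r):

```python
def geometric_partial_sums(a, r, terms):
    """Partial sums of the series a + a*r + a*r**2 + ... (assumes |r| < 1)."""
    sums, total, term = [], 0.0, a
    for _ in range(terms):
        total += term
        sums.append(total)
        term *= r
    return sums

a, r = 1.0, 0.25  # placeholder values, not taken from the figure
print(geometric_partial_sums(a, r, 8))        # approaches 4/3
print("closed form a/(1 - r) =", a / (1 - r))
```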
There are eight planets in our Solar System (Pluto was considered a planet between 1930 and 2006, and is currently considered a dwarf planet) and a megastar – the Sun. While all planets revolve around the Sun in different orbits, a few have distinctive features compared to the others. With these interesting facts about Saturn, let's gather more information about this gas giant.
1. They call it beautiful: Saturn is also known as the "Jewel of the Solar System" because of its beautiful rings and appearance. "PIONEER 11 WAS THE FIRST SPACECRAFT TO REACH SATURN."
2. It's far: it is the 6th planet from the Sun and also the farthest that can be seen with the naked eye.
3. The direction of rotation: Saturn rotates from west to east, which is also the direction of Earth's rotation.
4. Massive in size: Saturn is the 2nd largest of all planets, behind Jupiter.
5. Rings are many and spectacular: Saturn is famous for its rings, which are huge but very thin.
6. How many rings? Saturn has seven main rings with spaces between them. Also read: facts about Uranus
7. The composition like that of Jupiter: the composition of Saturn is not much different from that of Jupiter. It also contains hydrogen and helium, and methane in small proportions.
8. Why is it flat at the poles? It is the flattest of all planets in the Solar System. Because of its high speed of rotation and gaseous composition, Saturn has taken on this form. To give you more detail, note that the distance from the planet's center to its equator is 60,300 km, while that from its center to its poles is 54,000 km.
9. Days at Saturn: Saturn has the second shortest day, behind Jupiter. The length of a day on Saturn is 10 hours 32 minutes. Did you know how scientists measure this for various planets? They spot a crater and wait for it to rotate back into view, thus determining the length of a day on a planet. However, in Saturn's case, as there is no solid surface on the planet, they had to rely on the planet's magnetic field instead.
10. Around the Sun: due to the enormous distance between the planet and the Sun, Saturn takes 29.4 Earth years to make one revolution around the Sun.
11. Days vs years: on Saturn, days are short and years are long compared to those on Earth.
12. Visitors to the planet: only four spacecraft have studied this planet. This might be due to the fact that it is at a great distance from Earth compared to other planets. They also do not have any landing site on the planet because of the lack of a solid surface, and the planet's hot gases would not let any spacecraft through unscathed either.
13. Windy and noisy: winds on Saturn can blow as fast as 1800 km per hour, while those on Earth are far slower.
14. 'Saturn vs Earth' in size: given Saturn's diameter, almost 750 Earths could fit inside Saturn, and 1600 Saturns could fit inside the Sun.
15. Day of the week: Saturday – the 6th day of the week – is named after Saturn.
16. A popular moon: Enceladus – one of Saturn's moons – is the shiniest object in the Solar System, mainly because it is made up mostly of ice that reflects almost all of the light that falls onto it.
17. A powerful magnet: Saturn's magnetic field is 578 times more powerful than Earth's magnetic field.
18. History speaks: Saturn was mentioned in the oldest written records, by the Assyrians from 700 BCE. They named Saturn "Star of Ninib", a sparkle in the night sky.
19. Naming the planet: Saturn is named after the Roman god of farming, Saturn.
20. Atmosphere matters for life: Titan, another of Saturn's moons, is the only moon to have a substantial atmosphere. Its atmosphere is 370 miles deep.
21. Comparing the moons: Titan is also the second largest moon in the Solar System, after Jupiter's Ganymede, and it is larger than Mercury (the smallest planet in the Solar System).
22. Seasons: Saturn generates its own heat, and seasons on the planet are not dependent on the Sun. This could be because of its long distance from the Sun.
23. Hot or cold: -178 degrees Celsius is the average temperature on Saturn.
24. A pressure cooker: atmospheric pressure on Saturn is 100 times that on Earth.
25. Core's temperature: Saturn's core is as hot as the Sun. Also read: facts about Mars
26. How fast? Saturn's average velocity is 9.64 km per second, while Earth's is 30 km per second.
27. Light or heavy? Saturn is the least dense planet in the Solar System. It could easily float in a pool of water, provided we could build one large enough to accommodate a planet of this size.
28. Sky gazing: with the help of a telescope, Saturn's rings can be easily seen from Earth.
29. Constituents of its rings: did you know that Saturn's rings contain particles that can be as small as dust grains and as large as mountains?
30. Ring vs ring: many rings revolve around Saturn and, interestingly, their speeds vary. The difference in the speed of these rings could be because of the difference in the weight of the particles revolving in them.
31. At its core: rock, ice, and water at the center of Saturn, under intense pressure and heat, make for a solid core.
32. What color: Saturn is light brown in color.
33. How its rings came into existence: the planet's rings were formed from asteroids, comets, and moons that were shattered by Saturn's powerful gravity.
34. Galileo's observation: when Galileo Galilei first observed Saturn in 1610, he saw a pair of objects on either side of the planet, which led him to conclude that Saturn was triple-bodied. Of course, his observations were limited by his use of a basic telescope without the high magnifying power of the one later used by Christiaan Huygens to conclude that Saturn had rings around it.
35. Distance from the Sun: about 1.4 billion km (886 million miles), or 9.5 AU.
36. NASA's Cassini, the nuclear-powered spacecraft, will be destroyed by the US space exploration agency on 15th September 2017. NASA has to take this step in order to protect life on Saturn's moon Enceladus, where scientists suppose alien life may exist. If the spacecraft is not destroyed in a controlled fashion, chances are that it will hit this moon and may cause damage to the alien life that scientists think may exist on Enceladus. The Cassini spacecraft is a hefty $3.26 billion investment which NASA put into orbit in 2004 after its launch in 1997.
Quick facts about Saturn
Date of Discovery: Unknown
Discovered by: Known by the Ancients
Orbit around Sun: 1,426,666,422 km
Volume: 827,129,915,150,897 cubic kilometers
Density: 0.687 g per cubic centimeter
Surface area: 42,612,133,285 square kilometers
Surface gravity: 10.4 meters per second squared
Escape velocity: 129,924 km/h
Effective temperature: -178 degrees Celsius
Tilt of axis: 26.7 degrees
Mean orbit velocity: 34,701 km/h
Equatorial radius: 58,232 km
Contact angle, θ, is a quantitative measure of the wetting of a solid by a liquid. The instrument of choice to measure contact angles and dynamic contact angles is an optical tensiometer; a force tensiometer can also be used. Both optical and force tensiometers enable static and dynamic contact angle measurements. Contact angle is defined geometrically as the angle formed by a liquid at the three-phase boundary where liquid, gas and solid intersect. The well-known Young equation describes the balance at the three-phase contact of solid, liquid and gas:
γsv = γsl + γlv cos θY
The interfacial tensions γsv, γsl and γlv form the equilibrium contact angle of wetting, often referred to as the Young contact angle, θY. Low contact angle values indicate that the liquid spreads on the surface, while high contact angle values show poor spreading. If the contact angle is less than 90°, the liquid is said to wet the surface, with a zero contact angle representing complete wetting. If the contact angle is greater than 90°, the surface is said to be non-wetting with that liquid.
Contact angles can be divided into static and dynamic angles. Static contact angles are measured when the droplet is standing on the surface and the three-phase boundary is not moving. Static contact angles are utilized in quality control and in research and product development. Contact angle measurements are used in fields ranging from printing to oil recovery and from coatings to implants. When the three-phase boundary is moving, dynamic contact angles can be measured; these are referred to as advancing and receding angles. Contact angle hysteresis is the difference between the advancing and receding contact angles. It arises from the chemical and topographical heterogeneity of the surface, solution impurities adsorbing on the surface, or swelling, rearrangement or alteration of the surface by the solvent [1, 2]. Advancing and receding contact angles give the maximum and minimum values the static contact angle can have on the surface.
How to measure contact angle
Both static and dynamic contact angles can be measured by using a Theta optical tensiometer. In practice, a droplet is placed on the solid surface and an image of the drop is recorded. The static contact angle is then defined by fitting the Young-Laplace equation around the droplet, although other fitting methods such as circle and polynomial fits can also be used. Dynamic contact angles can be measured using two different approaches: changing the volume of the droplet, or using a tilting cradle. Figure 2 (a) shows the principle of the volume-changing method. In short, a small droplet is first formed and placed on the surface. The needle is then brought close to the surface and the volume of the droplet is gradually increased while recording at the same time. This gives the advancing contact angle. The receding angle is measured the same way, but this time the volume of the droplet is gradually decreased. In Figure 2 (b), the principle of the tilting cradle method is shown. The droplet is placed on the substrate, which is then gradually tilted. The advancing angle is measured at the front of the droplet just before the droplet starts to move. The receding contact angle is measured at the back of the droplet at the same time point.
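Rearranging the Young equation gives cos θY = (γsv - γsl) / γlv, so the equilibrium angle follows directly from the three tensions. A minimal sketch, with made-up tension values for illustration:

```python
import math

def young_contact_angle(gamma_sv, gamma_sl, gamma_lv):
    """Young contact angle in degrees from the three interfacial tensions.

    All tensions must share the same units (e.g. mN/m). A cosine outside
    [-1, 1] corresponds to complete wetting or complete non-wetting.
    """
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    if cos_theta >= 1:
        return 0.0    # complete wetting
    if cos_theta <= -1:
        return 180.0  # complete non-wetting
    return math.degrees(math.acos(cos_theta))

# Illustrative numbers only: water (gamma_lv ~ 72.8 mN/m) on a hypothetical solid.
print(f"{young_contact_angle(40.0, 20.0, 72.8):.1f} degrees")  # ~74 degrees
```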
Attension supplies three instruments for optical tensiometry. Optical tensiometry is the best choice when liquid-solid interactions and/or solid properties (such as surface free energy) are measured on samples that are not homogeneous and regular in size and shape. The main benefits are the following:
- Small liquid volume required (only a few microliters)
- The solid substrate can be asymmetrical (e.g. contact lenses)
- Both sides of the sample do not need to be similar (coating and surface modification characterization)
- Contact angle mapping over the surface enables heterogeneity and cleanliness characterization
- Roughness correction is possible, e.g. by using the 3D Topography module
Video of Contact Angle Measurement using Theta Optical Tensiometer
Dynamic contact angles can be measured by using a Sigma force tensiometer. A force tensiometer measures the mass affecting the balance when a solid sample is brought into contact with a test liquid. The contact angle can then be calculated by using the equation below when the surface tension of the liquid (γl) and the perimeter of the sample (P) are known:
Wetting force = γl P cos θ
In the figure below, a complete contact angle measurement cycle is presented. As can be seen, with a force tensiometer the measured contact angle is always a dynamic contact angle, since the sample is moving against the liquid. When the sample is immersed in the liquid the advancing contact angle is recorded, and when the sample is emerging the receding contact angle is measured. The graph of force/wetted length vs. depth of immersion will appear as follows:
- The sample is above the liquid and the force/length is zeroed.
- The sample hits the surface. For the sample as shown, with a contact angle < 90°, the liquid rises up, causing a positive force.
- The sample is immersed; the buoyant force increases, causing a decrease in force on the balance. Forces are measured for the advancing angle.
- After having reached the desired depth, the sample is pulled out of the liquid. Forces are measured for the receding angle.
Force tensiometry is an excellent choice when dynamic contact angles on homogeneous, regularly shaped samples need to be measured. Analysis of single fibers, such as wetting of hair, is handled easily by our sensitive force tensiometer. In addition, a force tensiometer is the best choice when loose powder or pigment wetting properties are measured with the Washburn method.
Video of Dynamic Contact Angle Measurement using Sigma 700/701 Force Tensiometer
Contact angle results can be used to calculate the surface free energy of solid substrates.
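Inverting the wetting-force relation gives θ = arccos(F / (γl P)), which is how a force tensiometer reading turns into a dynamic contact angle. A minimal sketch with hypothetical numbers; it assumes the force has already been corrected for buoyancy:

```python
import math

def contact_angle_from_force(wetting_force, surface_tension, perimeter):
    """Dynamic contact angle in degrees from: wetting force = gamma * P * cos(theta).

    wetting_force in N (buoyancy-corrected), surface_tension in N/m,
    perimeter in m. The ratio is clamped to [-1, 1] to guard against noise.
    """
    cos_theta = wetting_force / (surface_tension * perimeter)
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.degrees(math.acos(cos_theta))

# Hypothetical plate: 40 mm wetted perimeter in water (0.0728 N/m),
# registering 2.0 mN of wetting force.
print(f"{contact_angle_from_force(2.0e-3, 0.0728, 0.040):.1f} degrees")  # ~46.6
```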
- As usual Mr. Backe asked us if we needed help on a question. Then Elijah answered yes. So together we solved #17 from 6.1 in our textbooks.
a) Make a table of values for the first five rebound heights, where each rebound is 2/3 the height of the previous one. (First look at what we already know. I highlighted them.)
What we did was draw a picture and worked from there using a T-chart. *P.S. I can't draw the picture right now, but I'll try doing the chart.
B | 0, 1, 2, 3, 4, 5
Remember that we start with 0 because the ball was just dropped and bounced.
Height | 2 m, 4/3 m, 8/9 m, 16/27 m, 32/81 m, 64/243 m
We got those numbers by multiplying the previous bounce height by 2/3 because "of" means multiplying.
2/3 x 2/1 = 4/3 m
2/3 x 4/3 = 8/9 m
2/3 x 8/9 = 16/27 m
2/3 x 16/27 = 32/81 m
2/3 x 32/81 = 64/243 m
b) What is the height of the fourth rebound bounce? From the chart, the fourth rebound is 32/81 m.
c) Is this a linear relation? No. I know this because if we were to graph it, it would show a curve if we connected the dots. That means it's not a linear relation. Why? Because when we draw lines connecting the dots, a linear relation would give a straight line.
- We learned how to extrapolate and interpolate. Interpolate: looking for a missing value inside the given data. Extrapolate: estimating a value outside the given data.
- By the end of the class we learned 3 ways to solve linear relations: by graph, formula, and inspection.
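If you'd rather let a computer do the multiplying, here is a small Python sketch (an editor's illustration, not part of the class notes) that builds the same chart using exact fractions:

```python
from fractions import Fraction

height = Fraction(2)      # the ball starts at 2 m (bounce 0)
ratio = Fraction(2, 3)    # each rebound is 2/3 "of" the previous height

for bounce in range(6):   # bounces 0 through 5, just like the T-chart
    print(bounce, height, "m")
    height *= ratio       # multiply by 2/3 to get the next rebound height

# Output matches the chart: 2, 4/3, 8/9, 16/27, 32/81, 64/243 m
```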
Modern psychologists use several distinct approaches to the scientific study of behavior and mental processes: What are four basic approaches used by psychologists? Biological explanations are based on knowledge of living cells and organic systems. Brain scanning technologies have revolutionized research of this type. Scientists have increasingly detailed knowledge of cell interactions, chemical influences on the nervous system, and brain/behavior relationships. Behavioral explanations emphasize relationships between the organism and its environment plus the organism's history of learning. The "environment" is conceived as stimulation that can be measured. The organism responds with behaviors that also can be measured. Behaviorists once confined their attention to exterior, observable behavior. Now most consider thoughts and emotions as "hidden behavior" which can be measured and manipulated almost like observable behavior. Cognitive approaches stress information processing. Cognitive psychologists study the mental representation of thoughts, images, knowledge, and emotions. The word "representation" refers to the brain's storage of memories, images, perceptions, thoughts, and other mental contents. What is phenomenology? Subjective approaches to psychology describe unique thoughts, feelings, and experiences of individuals. Subjective approaches include phenomenology (phe-NOM-in-OL-o-gy), which takes the individual's experience as a starting point. If we ask you to report how it feels to be reading this text, for example, that is an investigation of phenomenology. How is psychology "by nature an integrative science"? One might argue that all four perspectives are relevant to almost all areas of psychology. Anxiety, for example, can be studied as a biological response, a behavior, a thought process, or an experience. Psychology is by nature a very integrative science, employing a variety of perspectives on the same phenomena.
In the history of art and design, the late 19th and early 20th centuries saw the development and formation of the major Modernist currents that played an important role in the cultural, political and social life of Europe. The contemporary concept of design originated from the industrialization era that first began in England in the early 19th century and then later in Germany in the mid-nineteenth century. Many new technologies were developed during that period. Over time, people began to move away from established traditions, which showed primarily in art, where the avant-garde and Expressionism prevailed (8). Modernist trends influenced the contemporary concept of design, suggesting a close relationship between design and pictorial art. The period in question is considered the time of mechanization and machine-building, the era of world fairs (Weltausstellungen) and international competition (der Glaspalast, London; der Eiffelturm, Paris). At the start of industrialization in Europe, manufactured goods featured some ornamentation (combinations of different patterns without semantic concept; no novelty) and were made of cheap, low-quality materials, thus triggering a reform in design that was often associated with different movements seeking to improve living conditions. Thus, the manufacturing industry's objective was to make products ordinary people could afford. Well-to-do Europeans, on the contrary, demanded aesthetically wrought products that could not sell well to the masses. The economy and trade pursued sales of competitive products in international markets.

Background of Design Development in Germany

The initial situation in 19th-century Europe was the dominance of historicism and old styles: Gothic, Romantic, Classical… In contrast to this trend, in the second half of the 19th century a movement known as Arts and Crafts began to evolve, founded by William Morris. He was an artist and critic who sided with socialists and opposed industrial mass production (4). He was also one of the first environmental activists (6). Morris, together with his friend, the art theorist John Ruskin, developed a new theory. Morris believed that mass production results in low-quality products, social problems and environmental pollution, as well as aesthetic and social disruptions. To deal with these issues, he initiated a reform in the art domain, whereby, in his view, applied arts and crafts were to take utility objects to a high aesthetic standard. He also argued against excessive historicism, advocating clearly formulated shapes made of natural materials instead (5). Many artist guilds embraced this theory as a model. As a result, innovatively engineered objects came into everyday use (5). The Arts and Crafts movement produced a strong influence on other art movements in Germany, such as Jugendstil, the Werkbund and the Bauhaus (6).

From 1895 until World War I, Jugendstil, or Art Nouveau, was an international movement that went under different names. In England it was popularly known as Decorative Style, in Belgium and France as Art Nouveau, and in Germany the name was derived from the title of the Jugend magazine (6). Jugendstil popularized jewellery shaped as stylized plants, particularly lily and nenuphar flowers loaded with symbolism (4). The Jugendstil centres in Germany were Munich, Darmstadt and Weimar. In Munich, the group was founded in 1892 in protest against official Academy art. The group's endeavours resulted in the emergence of furniture and architecture design.
Jugendstil artists made drawings for Simplicissimus, a then-popular political satire magazine. In Weimar, the group was created in 1902 by the Belgian architect and artist Henry van de Velde, who later founded the Saxon-Weimar School of Applied Arts (6). Overall, Jugendstil was a progressive artistic current. Many Modernist designers defied industrial manufacturing and realized themselves in crafts.

The Deutscher Werkbund was a German association of artists, architects, designers and industrialists based in Munich that partnered with product manufacturers such as the Dresden Craftsmen Workshops. This facilitated the departure from decorativeness in favour of functionalism, and a focus on streamlined, pure design that was extra-temporal and regime-independent (4). Important for the group was the work of one of its founders, Richard Riemerschmid, an architect, artist and critic, as well as of the renowned founder of modern industrial architecture, Peter Behrens. Eventually, the Werkbund went through a conflict over standardization versus individualization, which pushed it to the brink of a split. In subsequent years, increased standardization led to a stronger functionalism. The German Werkbund is therefore regarded as a bridge between Jugendstil and the Bauhaus. It strongly influenced contemporary industrial design.

Bauhaus. As early as World War I, in which he fought as a soldier, the anti-capitalist Walter Gropius abandoned the idea of industrial manufacturing that drove the German Werkbund (5). After the war, Gropius founded a training institution – a centre of art consultancy for industry, commerce and crafts (6). In 1919 the Bauhaus school was founded in Weimar through the merger of the Saxon-Weimar School of Fine Arts and the Saxon-Weimar School of Applied Arts founded by Henry van de Velde. The job of directing the new school was given to Walter Gropius, a young architect from Berlin, who right from the start invited a galaxy of artists to teach at the Bauhaus: among them were the Swiss artist Johannes Itten, the American artist Lyonel Feininger, and the sculptor Gerhard Marcks. From that moment the Bauhaus school became a symbol of art reform and the synthesis of arts (1). Following its curriculum, first-year students studied theoretical fundamentals and at least one craft. Not only applied artists taught at the Bauhaus; the school also engaged prominent Expressionists such as Paul Klee and Wassily Kandinsky (7). In 1923 an argument between Gropius and Itten resulted in the school's restructuring. Itten left the Bauhaus, and its Expressionist phase came to an end. Itten was succeeded by the Hungarian Constructivist artist László Moholy-Nagy, who led a metalworking studio and taught a pre-entry course at the Bauhaus. In early 1925, on the initiative of Fritz Hesse, Lord Mayor of Dessau and head of the Dessau municipal council, the Bauhaus moved to Dessau to function as a municipal school. Gropius announced a new program that asserted the dominant role of industry and science in design. In June, the Bauhaus published the first volumes of its Bauhausbücher book series, authored by Walter Gropius, László Moholy-Nagy, Paul Klee, Wassily Kandinsky and Piet Mondrian. Already by autumn, Bauhaus Co. Ltd released its commercial products. In October 1926 the Bauhaus received official accreditation from the Dessau government, and its staff were awarded professorships. Thus, the Bauhaus became known as the School of Design, with a curriculum now corresponding to that of a university (5).
In 1932, the Bauhaus ceased to operate in Dessau, yet Ludwig Mies van der Rohe maintained the Bauhaus tradition in a private institution in Berlin, which counted only 14 students by the winter semester. Wassily Kandinsky, Anni Albers, Ludwig Hilberseimer, Lilly Reich and Walter Peterhans still remained on the teaching staff. The Bauhaus idea of functional design was to be consistently developed in the United States. The German Bauhaus school left a remarkable footprint in the evolution of design in all areas, with some of its elements relevant to this day (1).

Thus, the key design development milestones in Germany in the late 19th and early 20th centuries were set by schools and movements such as Jugendstil, the Werkbund and the Bauhaus, which played a major role in shaping contemporary design. During that period, design followed two evolutionary paths: the rejection of artificial solutions and the dominance of naturalness; and the use of new technologies in manufacturing. This, in turn, resulted in the use of functional design in industry, and in attaining both high-quality workmanship and aesthetics in industrial products. That was the time when contemporary design was taking shape in Germany and in other West European countries.

1. Bauhaus Archive Berlin. The Bauhaus Collection. Berlin, 2010.
2. Bürdek, Bernhard E. Design. Geschichte, Theorie und Praxis der Produktgestaltung. Schweiz, 2005.
3. Campbell, Joan. The German Werkbund. Princeton: Princeton University Press, 1978.
4. Eckstein, Hans. Formgebung des Nützlichen. Marginalien zur Geschichte und Theorie des Design. Düsseldorf, 1985.
5. Fiell, Charlotte & Peter. Design des 20. Jahrhunderts. Köln, 2013.
6. Hauffe, Thomas. Design. Köln, 1995.
7. Morteo, Enrico. Design-Atlas – von 1850 bis heute. Dumont, 2009.
8. Read, Herbert. Kunst und Industrie. Stuttgart: Hatje, 1934.
Vaccinium, genus of about 450 species of shrubs, in the heath family (Ericaceae), found widely throughout the Northern Hemisphere and extending south along tropical mountain ranges, especially in Malesia. The shrubs are erect or creeping, with alternate deciduous or evergreen leaves. The small flowers resemble those of the true heaths (Erica), but the ovary is beneath the flower. The flowers are single, clustered, or in long spikes in the leaf axil. The berries are usually edible. More than 40 species of Vaccinium shrubs occur in North America, especially in the northern and mountainous parts. The highbush blueberry (V. corymbosum) and other species of blueberries are found in the eastern United States and adjacent Canada. The cowberry (V. vitis-idaea), also known as red whortleberry, or mountain cranberry, grows in northern Canada. Several species occur in the Rocky Mountains region. More than 10 species are found in the Pacific states, including the western blueberry (V. occidentale), the red bilberry (V. parvifolium), and the California blueberry (V. ovatum). Four species occur in Great Britain: the bilberry (V. myrtillus), also called blaeberry, or whortleberry; the bog bilberry (V. uliginosum); the small-fruited cranberry (V. oxycoccus); and the cowberry (V. vitis-idaea). All are widely distributed throughout Europe, Asia, and North America. See also bilberry; blueberry; cranberry.
As devastating as it may appear, fire is a natural process, and Joshua Tree National Park has endured centuries of lightning-caused fires. Fire in deserts, however, has been less common than in forests, because shrubs and trees are widely spaced in deserts and grasses are not as abundant as in wetter areas. The park maintains records of fires dating back to 1945. Most of these fires occurred between May 18 and September 20, when desert vegetation was very dry. Seventy-four percent of the fires were ignited by lightning. The remaining 26 percent were human-caused.

The number and intensity of lightning fires has increased over the past 50 years. Before 1965, most lightning fires burned less than one-quarter acre. After 1965, more large fires and more frequent fires have been recorded. In 1979 the Quail Mountain fire burned 6,000 acres; in 1995 the Covington fire burned 5,158 acres. And in 1999, the largest fire in Joshua Tree's history, the Juniper Complex fire, burned 13,894 acres of slow-growing California junipers, pinyon pines, and Joshua trees. Exotic grasses, such as red brome and cheatgrass, now represent up to 60 percent of the biomass from annuals. Resource managers believe the increased fuel loads provided by these exotic grasses are responsible for carrying lightning-ignited fires from plant to plant.

Desert plants do not need fire to reproduce, and most are highly susceptible to fire. Shallow roots are easily burned, and seeds lying on the ground waiting to germinate are destroyed. The desert does grow back, but recovery after a fire is slow. Joshua trees can live for hundreds of years, and if one burns, it will take a hundred years for another to take its place. Even small shrubs like blackbrush may require 50 years to return to a burned area. Non-native grasses are also able to recover quickly after a fire and are usurping the habitat of native grasses.

The key to managing fire in Joshua Tree is in understanding how wildfires affect vegetation and wildlife in a desert environment where non-native grasses may have substantially altered the local ecology. Biologists are monitoring the long-term consequences of these newly arrived plants. To help preserve and protect wildlife, scenery, and natural processes, each park develops its own Fire Management Plan. At Joshua Tree, we are revising our plan to provide for full suppression of all fires, including those naturally caused, until we have a better understanding of fire behavior and effects in the park. Although fire plays a beneficial, even critical, role in some ecosystems, that may not be the case at Joshua Tree under these new conditions.
Periodically, the media will announce that the surprise discovery has suddenly been made that the Amazon Basin had been the home of “advanced, spectacular civilizations.” In reality, this fact has been well known in the archaeological community for decades. But the general public still knows little about it, so the media continue to treat each new discovery of ancient civilizations in the Amazon as a surprise. New discoveries continue to be made. Since the original version of this article was written in 2006, major discoveries have been made in Santarem, Brazil, on the lower Amazon, in the upper Purus River region of Brazil and Bolivia, in Chachapoyas, Peru, and in San Martin de Samaria in Peru.

Archaeologists today estimate that the pre-Columbian population of the Amazon Basin was as high as 20 million – far more than live in the Amazon today, even including the large cities such as Belem, Manaus, and Iquitos. Yet these dense and organized populations had a very different relationship with the natural world than most recorded civilizations. There are four major types of ecological zones in the Amazon. Each one has given rise to a way of life adapted to it. The four zones are the varzea, or fertile floodplains; the upland forests that lie above flooding; the savanna; and the blackwater ecosystems. All of these zones except the blackwater ecosystems have given rise to civilizations.

The Varzea: land of river cities

The varzea is also called the whitewater floodplain. (“Whitewater” rivers in this context means nutrient-rich rivers, washing down soil from the Andes.) These rivers seasonally flood and leave silt upon the land. These river-fertilized soils are the most fertile in the Amazon Basin, but because of the seasonal flooding, have the shortest growing season. The crops that were grown on the river-fertilized soil were varieties bred to mature during the half year of dry season. As much manioc was produced in four to six months as could be produced in a year and a half on the terra firme. Not surprisingly, the combination of the most fertile soils with an abundance of aquatic resources made the shores of the Amazon River and its major whitewater tributaries the most densely populated zones of the region. The Amazon and its major tributaries were also major trade routes; pottery shards testify to the widespread trade conducted by the Omaguas in particular, who lived at the headwaters of the Amazon but had trade networks stretching for thousands of miles. (Lathrap 1974)

The cities along the Amazon were actually seen and recorded by the chroniclers of the first European expedition, which was led by the conquistador Francisco de Orellana in 1541. Orellana described the Amazon as a busy waterway which had, on both sides of the river, populous towns with elaborate temples, plazas and fortresses. His chronicler, Fray Gaspar de Carvajal, recorded cities that extended for miles along the banks of the major rivers of the floodplain. He relates that, for one stretch of 80 leagues (275 miles), they found the land “all speaking one language and densely populated with towns and villages with scarcely more than a crossbow shot between them. Some of the towns extended for five leagues (17 miles) without any separation between the houses.” Many roads led to the interior. A few settlements were located on the flood plain, where during rainy season they were accessible only by canoe.
In one place, “inland from the river, at a distance of two leagues, more or less,” there could be seen some very large cities that “glistened in white.” Villages were composed of communal houses, each occupied by an extended family. The towns and villages were organized into confederations which traded and fought with one another. The Spaniards’ accounts describe a superabundance of food. Carvajal wrote that, in one village, they found enough meat and fish and cassava bread “to feed an expeditionary force of a thousand men for a year.” Turkeys, ducks, and parrots were raised in the villages, and ducks were hunted by the thousands using nets. Fish were obtained in great abundance, and manatees were a favorite prey. Turtles, each “larger than a good sized wheel,” were raised in corrals, estimated to contain sometimes six to seven thousand animals. Turtle and cayman eggs were eaten. Wild rice and water lily seeds and tubers were harvested. Carvajal added that “what is more amazing is the slight amount of work that all these things require.” The Omagua people produced arts and crafts of a very high level, especially pottery, described by Carvajal: “plates and bowls and candelabra of this porcelain of the best that has ever been seen in the world… all glazed and embellished with colours, and so bright that they astonish, and, more than this, the drawings and paintings which they make on them are very accurately drawn just as with the Romans.” This was primarily the land of the Omagua, a well-organized and apparently aggressive, expansionist power along the western length of the Amazon River.

At the other end of the Amazon River, where it meets the Atlantic Ocean, an island larger than the country of Switzerland lies in the river’s mouth: the island of Marajo. On this island, archaeologists have found evidence of a large-scale but decentralized civilization. In a huge cave called Painted Rock Cave, signs of human culture have been found dating back as far as 13,000 years. Ceramic bowls found in Painted Rock Cave and other places in the area are the oldest known pottery in the Americas, and there is evidence that four thousand years ago, the Indians of the lower Amazon were growing at least 138 crops. There are mounds 1,800 years old, elaborate road systems, and artificial ponds and canals. Anna C. Roosevelt, curator of the Field Museum in Chicago, who has excavated the site, says that the mound-building culture lasted well over a thousand years, had possibly well over 100,000 inhabitants and covered thousands of square miles. “They have magnitude. They have complexity. They are amazing, and they are not primitive,” says Roosevelt. Amazonia, Roosevelt says, “was a source of social and technological innovation and continental importance.”

Upland Forests: the land of Ayahuasca

The upland or terra firme forests of the Amazon (meaning areas above flooding, not a contiguous zone) are extremely heterogeneous, comprising an extremely diverse range of microecosystems. They have “the greatest number of species and the greatest accumulation of plant biomass on the planet” (Moran 1993:58). The region (known as the “eyebrow of the jungle”) where the rainforest meets the foothills of the Andes, from Colombia to Bolivia, is the most biodiverse region on Earth. The Upper Amazon was the cradle of horticultural diversity for both the Andes and the Lower Amazon. It was a natural laboratory for developing the science of breeding diverse varieties of plants adapted to microecosystems.
This science, highly developed and systematized much later by the Incas, made it possible to develop crop varieties adapted to the extremely varied growing conditions and made highland Andean civilization possible. There is, in fact, credible evidence that the civilizations of the highland Andes, as well as the other civilizations of the Amazon, had their original roots in the Upper Amazon. (See Lathrap, The Upper Amazon, 1970.) This is the home of such famous groups as the Shuar (the “Jivaro headshrinkers”), the Shipibo, and the Ashaninka. Most of the ancient cities of the region (such as Puyo, Ecuador, ancient capital of the Puyo Runa or “Cloud People”) were made of biodegradable materials and have disappeared. One city whose stone ruins still exist is Kuelap, ancient capital of the Chachapoyas culture of northern Peru.

The Quechua- or Kichwa-speaking peoples (Runa) have a key place in Amazonian as well as Andean history. Quechua is most famous as the “language of the Incas,” because it was the official language of Tawantinsuyu, the Inca Empire. It served as the shared second language of communication in the Andean highlands, among many different peoples who spoke many different native languages. The Amazonian Kichwa or Quechua speakers collectively comprise probably about 1% of the Quechua-speaking population. The popular assumption (mentioned as fact in some tourist guides) is that Amazonian Kichwa speakers (since they speak an “Andean language”) originated as post-Inca or post-conquest migrants from the Andes. However, the linguistic evidence strongly indicates that Kichwa was not first introduced to Ecuador by the Incas, but that it was already being used in Ecuador and nearby parts of Peru long before the Inca Empire arose — as a trade language along the Napo River. Linguistic evidence suggests that Kichwa was first used in Ecuador, including both the Amazon region and the highlands, at least eight centuries before the arrival of the Incas in Ecuador — that is to say, Kichwa may have been spoken in the Amazon as long as fourteen hundred years ago. The Incas arrived in that region less than five hundred years ago. The Puyo Runa, or Cloud People, remember their ancient capital of Puyo on the upper Pastaza River, where today lies the present-day city of Puyo, the capital of the present-day province of Pastaza, Ecuador.

In present-day Ecuador, there is a mountain pass called Papallacta where highland Indians and lowland Indians met to trade. Its name translates as “potato town” because potatoes were the major trade item brought by highland Indians. The Napo River begins below Papallacta Pass and flows down to finally join the Amazon River near present-day Iquitos. Thus, the Napo River connects the highlands with the Amazon River. This appears to be the region of the most active contact and cultural interflow between the Andean highlands and the Amazonian lowlands. Highland influence is conspicuous in the music of the Napo Runa and in the women’s traditional dress. The highland curanderos of Ecuador, for their part, incorporate many elements of Ayahuasca shamanism into their curing rituals, without using Ayahuasca itself. The Napo River, by all evidence and scholarly consensus, appears to be the original home of the Ayahuasca vine (Banisteriopsis caapi) and of the cultural form known as Ayahuasca shamanism that is now widespread in the Upper Amazon.
However, it does not appear to be the place where DMT-containing admixture plants (Psychotria viridis and Diplopterys cabrerana) were first combined with the Ayahuasca vine. (Highpine 2013) Yet, collectively, the Indian peoples of the Upper Amazon (Colombia, Ecuador, Peru, and far western Brazil) seem to have been much more resilient than groups in most other areas of the Amazon. This region coincides with the use of Ayahuasca, and with the deepest plant shamanism in the world. There may be no connection between this and the survival of the Upper Amazonian peoples… or there may be.

The Savanna: earthworks and forest islands

In the savannas of the southwestern Amazon Basin — the Beni province of Bolivia and nearby regions of Brazil — are some of the richest discoveries of what is possible through traditional sustainable horticulture. In the Llanos de Moxos region of the Beni, indigenous peoples built a vast infrastructure of earthworks that enabled their culture to flourish over several thousand years. (Erickson 2000) Archaeologists have uncovered massive raised field systems, elevated causeways, transportation canals connecting river systems, pyramid-like mounds, clusters of odd, zigzagging ridges scattered through the savanna that may have been fish farms, and other earthworks. Raised fields are connected in groups of islands, aligned in a north-south direction. Mounds, as high as nearly 60 feet, rise above the floodwaters. (Mann 2000a) Trees grow on the causeways and mounds, protected from the savanna’s seasonal fires and floods. Soils were enriched by burning, mulching, and depositing wastes, and are filled with fragments of pottery. Although these peoples abandoned their earthworks from four hundred to seven hundred years ago, Erickson and others argue that the process of ecological change begun by the Beni mound builders continues to this day. They permanently transformed regional ecosystems, creating “a richly patterned and humanized landscape” that is “one of the most remarkable human achievements on the continent.” They grew crops on raised fields; practiced agroforestry, planting groves of palm, nut, and fruit trees; and raised fish and apple snails. Erickson estimates, both from the amount of labor that had to have gone into these works and from the potential crop yields on these permacultured mounds, that the population of just this corner of Bolivia would have been in the hundreds of thousands. “The quantity and mass of material deposited indicates that a lot of people were responsible, creating the mounds over a period of at least 2000 years,” beginning 3000 to 5000 years ago. Pottery shards — the only artifact not subject to decay — show that the villages were linked by trade networks that stretched over thousands of miles, and reveal a complex mosaic of societies linked by networks of communication, trade, alliance, and perhaps warfare.

In 2003, in the upper Xingu region of the southern Amazon in central Brazil, archaeologists discovered an ancient network of large villages linked by roads in a carefully organized, gridlike pattern. The villages were built around large, circular central plazas, and were defined by curbs, moats and ditches up to 1.5 miles long and 16 feet deep. The villages were evenly spaced two to three miles apart in a “galactic” pattern around a hub. Straight roads — as wide as 165 feet in some places, the width of a modern-day four-lane highway — lead out from them at specific angles, repeated from one plaza to the next.
Heckenberger (2003) describes this layout as a “gridlike or latticelike organization of nodes (plazas) and connecting thoroughfares.” “This kind of elaborate regional plan would have required the relatively sophisticated ability to reproduce angles over large distances.” “The sophistication of the layout bespeaks a knowledge of mathematics, architecture, astronomy, and engineering.” Where the villages converged on wetlands, bridges, moats, canals, causeways and artificial ponds were found, many of which are still in use today. The biggest villages had residential areas as large as 200 acres. Between the villages were open parklands and working food forests. Heckenberger estimates that each cluster of six to twelve villages supported between 2500 and 5000 people, and the complex, geometrically patterned set of interlinking roads radiating out of the plazas shows that there must have been a great deal of social interaction among the villages, which implies that all of the villages were occupied simultaneously. Having mapped all of the sites within a 15 mile by 15 mile square, Heckenberger and colleagues tentatively estimate that the population of the region numbered in the tens of thousands. Heckenberger characterized the areas between the villages as “saturated anthropogenic landscapes.” They had great fortified cities — according to Heckenberger — “with a complicated plan, with a sense of engineering and mathematics that rivalled anything that was happening in much of Europe at the time.” … “The Xinguano people built their villages according to a very clear plan, at a very large scale, and all of them are interconnected with one another.”

The Kayapo, a group about a hundred miles north on the Xingu River studied by Darrell Posey, still practice a related system: The Kayapo recognise ecosystems that lie on a continuum between the poles of forest and savanna. They have names, for example, for as many as nine different types of savanna – savanna with few trees, savanna with many forest patches, savanna with shrub, and so on. But the Kayapo concentrate less on the differences between zones than on the similarities that cut across them. Marginal or open spots within the forest, for example, can have microenvironmental conditions similar to those in the savanna. The Kayapo take advantage of these similarities to exchange and spread useful species between zones, through transplanting seeds, cuttings, tubers and saplings. Thus there is much interchange between what we tend to see as distinctly different ecological systems. Kayapo agriculture focuses upon the zones intermediate between forest and savanna types, because it is in these that maximal biological diversity occurs. Villages too are often sited in these transition zones. The Kayapo not only recognise the richness of these zones, but they actually create them. They exploit secondary forest areas and create special concentrations of plants in forest fields, rock outcroppings, trail sides, and elsewhere. The creation of forest islands, or Apêtê, demonstrates to what extent the Kayapo can alter and manage ecosystems to increase biological diversity… Apêtê look so “natural”, however, that until recently scientists in fact did not recognise them as human artifacts.

Purus River and Santarem

New discoveries continue to be made.
In 2010, in the Purus River region that stretches from northern Bolivia to the state of Amazonas in Brazil, scientists documented more than two hundred and ten geometric structures, some of which may date as far back as the third century A.D. They are spread out over an area that spans more than two hundred and fifty kilometers.

Blackwater ecosystems: the fish forest

The blackwater ecosystems are the most barren and nutrient-poor ecosystems in the Amazon. Unlike the “whitewater” rivers that bring nutrients from the Andes to fertilize the floodplains, “blackwater” rivers are nutrient-poor, and flooding does not enrich the soil as it does in the varzea. In the blackwater ecosystems, the primary food source is fish. And the fish depend on the forest for their food supply. River margins provide food for fish — leaves, fruits, flowers, seeds, insects, insect larvae, arachnids, crustaceans, and worms. At least fifty fish species feed almost exclusively on fruit that falls into the river. Other fish feed on insects which, attracted by the fruit, fall into the river. During flooding cycles, waters overspill their banks and allow fish into the flooded forests to feed. Fruit trees are planted to feed the fish, and forests along the rivers are protected. Instead of the rivers replenishing the land through flooding, the forest replenishes the river. In effect, the forest is maintained as a grazing ground for fish.

The Uananos of the Uaupes River of Brazil are acutely aware of the importance of food sources from the adjacent forest in maintaining fisheries: The Uanano describe fish spawning as a fruit-exchange dance. Any interruption of these dances or interference in the supply of fruits requisite to them is severely punished by retribution of the fish elders. While the adult fish are caught as they swim back from the “dances,” in exchange, the Uananos protect the offspring and preserve their food source — the forest. The Uanano depend upon the generosity of the fish and the forest and avoid offending them. (Chernela 1982)

Arnhem (1996), writing of the Makuna of Colombia, who live farther upstream on the Vaupes River (called Uaupes in Brazil), says: In the Amazon, forest and river are closely linked. In an environment where considerable tracts of land are permanently or periodically inundated, it is difficult to tell where the forest ends and the river begins. The rain-forest with its myriad of waterways is thus very much an integrated whole, to which its animal inhabitants have adapted. Tree-dwelling species stay in the upper layers of the forest to escape flooding, while ground-living animals have developed an amphibian capacity to move freely between land and water. The jaguar and its principal prey — tapirs, peccaries, and large rodents — are, for example, excellent swimmers, while other predators such as anacondas, caymans, and otters, live most of their lives in the rivers. This close interdependence between the life worlds of the river and the forest is reflected in the peculiar Makuna idea that fish and game animals may transform into one another. In their hunting tales fish at will walk up on land to feed on the fruits and seeds of the forest. Conversely, game can turn into fish and disappear into the depths of the rivers to escape the hunters. Therefore, they say, fish never have empty stomachs, and hunters often fail to track down game along the river beds.
In the words of a Makuna shaman: When the fish travel along the river they visit the fish people of other houses, just like people visit one another in this world. The fish people go to drink and dance in each other’s houses. As they leave one house and enter another they take off the old dresses and put on new ones; each house is different, with its own name and history. The fish change accordingly. Even the river changes from one place to another; the water is here bitter and heavy, there light and sweet like the juice of sweet fruits. The fish also change with season; in the appropriate season they perform forest fruit rituals, make dabucurí feasts, and play their Yuruparí instruments. Therefore the fish has to be blessed differently according to season and place, depending on when and where it was caught. (Arnhem 1996:29)

The main horticultural crop in blackwater ecosystems is bitter manioc. Bitter manioc cultivation solves one of the great problems of Amazonian populations: how to cultivate soils extremely poor in nutrients, extremely acid, and with toxic levels of aluminum. Manioc, a plant that appears to have evolved in just such areas of South America, can produce impressive results where nothing else will grow. Manioc is even adapted to drought, during which it loses its leaves and goes into dormancy, regaining its leaves with the return of soil moisture. More than a hundred varieties of bitter manioc have been reported among blackwater populations. Bitter (toxic) manioc has been developed through selective breeding from the sweet (nontoxic) varieties. The toxic chemicals in bitter manioc (which must be processed out for human consumption) help to protect against insects and herbivores, so in the blackwater ecosystems, conscious selection favors the more toxic varieties.

Blackwater ecosystems are the classic “counterfeit paradise” of the Amazon, fragile ecosystems that place severe limits on human population. They are drained (and flooded) by “blackwater” rivers that carry no fertilizing agents. Garden sites in blackwater regions cannot be used for more than a single year without yields declining dramatically, and a garden clearing cannot be more than an acre or so in size, or it may not reforest itself. The spaces cleared for gardens must be small or the area cannot revegetate, because it needs the leaf litter from the surrounding forest in order to reforest. Without the leaf litter from the surrounding intact forest, the soils would become either white sands (podsols) or brick-like laterites. Then the deforestation would become permanent. And when the forest cover is removed, these soils quickly erode, altering the river channel and depositing silt in the river. So the garden remains small and is moved every single year. And in blackwater ecosystems, it may take over a hundred years for the cycle to complete itself and the primary forest to return. Thus (outside of areas of terra preta, see below) human populations in blackwater ecosystems must remain small and nomadic.

Terra preta is a phenomenon found across all four types of ecosystems, although it is rare in the upland forests closer to the Andes. Terra preta (“black earth”), also called Terra Preta do Indio (“Indian black earth”), is the Brazilian name for certain highly fertile dark earths in the Amazon region created by indigenous peoples.
Terra preta soils exist across a wide range of parent soil types — red or yellow kaolinitic ferralsols, acrisols, and sandy podzols — and terra preta is distributed throughout a wide range of Amazonian environments: black and whitewater ecosystems, bluff edges and headwaters, floodplains and terra firme. It is estimated that 10% of the Amazon Basin is terra preta. The area of terra preta already mapped is immense — twice the size of the UK. The properties and behavior of terra preta defy scientific understanding. Terra preta does not form naturally out of compost, even where composting is intentional. Contemporary settlements, even indigenous ones, do not create terra preta. Yet terra preta seems to continuously regenerate itself. In Brazil, there are sites where prehistoric terra preta has been intensively farmed for nearly forty years with no addition of any fertilizer. Some scholars suggest that terra preta essentially represents a “living organism” because of its capacity to regenerate itself.

Archaeologists have surveyed the distribution of terra preta and found it correlates with the places in which conquistador Francisco de Orellana’s chroniclers described seeing cities. Radiocarbon dating shows that the terra preta seems to start around the time of Christ, perhaps a few hundred years earlier. This is the same time that archaeologists first see complex polychrome pottery and evidence of mound building in the Beni and on Marajó Island. The abundance of pottery shards found in every deposit of terra preta, and the traces of ancient roads connecting them, demonstrate that terra preta correlates with intensive human occupation. Organic matter in terra preta averages 40 to 50 cm deep, but may be as deep as one to two meters (!). Radiocarbon dating demonstrates the extremely fast rate of terra preta formation — a meter of soil produced in just a few decades. It is calculated that 24,500 tons of silt and algae or 9,000 tons of mulch would be required to lay down one meter of topsoil over one hectare, which implies high labor investment and complex social organization. In the modern world, intensive agriculture and population growth are associated with ecological destruction and soil decline – but in the Amazon, as a result of farming and as a result of population growth, the soils grew richer, not poorer. Terra preta today also appears to be preserving plant species that cannot survive elsewhere in the Amazon, thus helping to preserve and promote biodiversity. Besides creating continuing soil fertility, terra preta has another benefit that its creators could not have foreseen: it helps to sequester carbon dioxide. The technology for creating terra preta seems to have been lost by present-day indigenous populations of the Amazon. Special inoculations of microorganisms were involved, but those bacterial cultures and the technologies for using them are today lost. But when the mystery of creating terra preta is solved, it may be one of the greatest gifts of the Amazonian native peoples to the world.

The Times of Destruction

The Pastaza Runa refer to the “Times of Destruction,” when they experienced the most severe population crash. The accessibility, density, and networking of these civilizations made them extremely vulnerable to both epidemics and European slaving. The first epidemics quickly swept up and down the major rivers, where populations were most concentrated; the Amazon River itself, once the most densely populated zone of the Amazon Basin, suffered 100% population loss.
The indigenous populations of the varzea became virtually extinct; their cities, made entirely of biodegradable materials, vanished into the earth without an archaeological trace. No indigenous populations remain along the Amazon River proper, and the impression passed down in the last few centuries was of a thinly populated river. The Zaparos, once a major power on the Pastaza River of Ecuador and Peru, are today reduced to about five Zaparo speakers. The Omaguas are today reduced to about ten speakers of the language living near Iquitos, Peru. The settlements found in the Xingu had thrived for eight hundred years, from 800 CE to 1600 CE, when the population crashed due to European epidemics. These epidemics wiped out the population before Europeans ever set foot in the Xingu. The Upper Xingu region is so remote that Europeans did not reach the area until more than two hundred years after the first colonists arrived in Brazil. By then, the villages were mostly abandoned, the people long since decimated by the spread of European diseases such as smallpox, measles and influenza. The Napo River is the most accessible part of the Amazon Basin. In fact, it was the first area penetrated by Europeans, and the first area hit by epidemics, which even preceded the Europeans themselves (the banks of the Napo River were already depopulated by the time Orellana saw it). The indigenous peoples who survived the epidemics are for the most part those who were in the “boondocks,” on the fringe of the major Amazonian civilizations. Since then, tribes and communities have continued to be shattered by various destructive forces, from epidemics to missionary disruption to virtual enslavement on encomiendas or land grants, the Rubber Boom, and, in recent decades, massive colonization, deforestation, land losses, and the poisoning of rivers by petroleum companies.

The stereotypical picture of the Amazonian Indian is of a naked hunter in the jungle, shooting poisoned darts at monkeys from blowguns. This picture is not inaccurate: indigenous Amazonians traditionally were, and as much as possible continue to be, hunters and gatherers, traditionally didn’t wear clothes, and many did and do hunt with blowguns and poisoned darts. The indigenous peoples of the Amazon are hunters and gatherers, but they are not only hunters and gatherers; they are not even mainly hunters and gatherers. They are mainly gardeners of the forest, but practitioners of a kind of gardening that obscures the distinctions between “wild” and “cultivated.” They practice a kind of horticulture that is not only sustainable, but has proven to be the only sustainable way of cultivating in a tropical rainforest environment, and that has actually helped to increase biodiversity. The modern Amazon rainforest is only 15,000 years old (having largely dried up during the Ice Age), and humans have been active participants in this ecosystem since the beginning. Humans have actively participated in creating the most biodiverse region in the world. All of the Amazonian countries today have aggressively promoted “colonization programs” in the Amazon, as a solution for their landless peasant problems, and envisioned the rainforest as supporting large-scale commercial agriculture, both cultivation and cattle raising.
But traditional western agricultural practices – including permanent clearing and monocropping – have proven an ecological disaster, resulting in permanent deforestation that continues to spread as poor colonists move from exhausted lands to new areas. The rainforest conservation community has responded for decades by stressing that the soils of the Amazon are so poor that the Amazon could only support a small population. The discovery that Amazonian Indians had large populations in the past was resisted for a time because it was believed that this could give the green light to even more intensive colonization of the Amazon. But colonization has been destructive because it is based on agricultural practices completely unsuited to the rainforest. The sustainable practices used by the Indians, conscientiously applied, could support millions of people sustainably and without destroying biodiversity — but the greater the population, the more conscientiously the practices must be followed. Indigenous wisdom remains vital to our world. It is a living part of the human cultural wealth and can help to guide humanity in re-aligning its way of living with the world. Image: Rain, River, Forest, a Camp, by Morgan Maher
Researchers believe that low-mass stars such as the sun start to grow by dragging gas from their surroundings around them in a ball, which later flattens into a disk. If the same process of accretion is responsible for growing stars of 10 times the sun's mass or more, then the incoming ball of gas has to crumple much faster into a rotating donut or disk to release the enormous buildup of radiation in the still-forming star. The radiation could then escape perpendicular to the disk, taking some gas with it [see image above]. Despite identifying rotating gaseous disks around several massive young stars, researchers had never found a star exhibiting all three characteristics of the process--rotation, ejected gas and infalling gas. A group of astronomers had already determined that a young star of about 20 solar masses, G24 A1, has a gaseous torus and outflowing gas. To complete the trifecta the team directed the Very Large Array at the National Radio Astronomy Observatory in Socorro, N.M., to tune in on the star's ammonia, a marker for the densest material encircling it. Judging from a shift in the wavelength of radiation absorbed by the ammonia compared with that absorbed by the molecule at rest, the team concluded that the star's dense material is in motion, which strongly indicates that gas is rotating and falling into the star like water circling a drain, the group reports in the September 28 Nature. "We have detected all the elements that one would expect," says astronomer Maria Beltrán of the University of Barcelona, the report's first author. Some researchers have proposed that massive stars should often form by the collision of smaller stars, like water droplets fusing. "If you think massive stars form by collision you wouldn't expect to see a nice clean disk with jets coming out of it," says star-formation theorist Mark Krumholz of Princeton University. "It is pushing up the regime in mass range where accretion works," agrees observational astrophysicist John Bally of the University of Colorado at Boulder, but "it still leaves quite open in my mind what happens in larger stars," which are bright but very rare.
Comparisons Worksheet 4
In this comparisons worksheet, students examine 2 pictures and complete the sentences about them by choosing the correct words from the word bank. Words will be opposites. There are 3 questions.
The Outsiders by S.E. Hinton
Build a unit around The Outsiders, and use these materials to help! Included here is a group of prereading activities to choose from and a list of tasks. Different than specific lesson plans for The Outsiders, these tasks are made up of... 6th - 8th English Language Arts CCSS: Adaptable
Using 5 Senses to Describe Details
Should your young writers use more sensory details in their writing? Encourage class members to incorporate similes and metaphors to their poetic writing with a packet of worksheets focused on figurative language. Class members complete... 2nd - 5th English Language Arts CCSS: Adaptable
Once a rare occurrence, frequent or chronic inflammation of the middle ear has come to be seen as a "normal" aspect of childhood in the United States. Generally, ear infections are treated with antibiotic drugs. Often, ear tubes are implanted. Some doctors recommend the removal of tonsils and adenoids. Other doctors (especially in countries outside the U.S.) are beginning to advise "watchful waiting" for the less severe ear infection, to see if it will resolve itself without the use of antibiotics.

Based on feedback from families using the Feingold Program, children with behavior and learning problems appear to be very susceptible to ear infections. According to these families, removing the synthetic additives not only enabled their children to calm down and focus, but it also brought an end to chronic ear infections. When children eat foods with synthetic chemicals, some of them experience a sensitivity reaction that includes tissue swelling. If the cells in the Eustachian tubes swell, they can close up and prevent fluid from draining out of the middle ear. This means that any liquid in there will be trapped in a warm, dark environment; bacteria in the fluid will increase, and this can lead to an infection. Some doctors now believe that the medicine typically given to children with ear infections actually brings on the next episode. Of course, most of these medicines contain artificial colors and flavors!

There have been quite a few scientific studies connecting a change in diet with improvement of ear health. In a Polish medical journal, researchers reported the following: The frequency of hospitalization of infants and children for otitis media [middle ear infection] at the Clinic of Children's Diseases of the Medical University of Lodz decreased from 22.6% in 1975 to 4.2% in 1995. Said the authors, "It was caused mainly by the change of a way of nutrition from artificial to natural..." Wasowka-Krolikowska et al., Pol Merkuriusz Lek, 1998 Dec. In a study published in Pediatrics in 1993, researchers followed over 1,000 infants for their first year and found that those who were breast-fed for at least four months had a significantly lower rate of otitis media. Duncan, B., et al., Pediatrics, 1993 May.
Atomic Spectra Worksheet Answers is a sheet of paper containing tasks or questions that are designed to be completed by students. The Ministry of National Education describes worksheets as generally taking the form of instructions and steps for completing a task. A task assigned in an activity sheet should clearly correspond to the basic competencies to be achieved. Worksheets can serve as a student guide for carrying out investigation and problem-solving activities. Educational worksheets should reference the basic competencies being taught, or at least relate to material that has already been covered. Worksheets can be understood as workbooks that facilitate student learning. The fundamental purposes of using Atomic Spectra Worksheet Answers are to provide concrete experience for students, support different learning styles, generate interest in learning, increase retention of teaching and learning, and make use of time effectively and efficiently. You can look closely at the example Ap Chemistry Photoelectron Spectroscopy Worksheet on this page.
What is it? - Social awareness is the ability to understand and respect others’ perspectives, including those of people from diverse backgrounds and cultures. It is the ability to understand social and ethical norms and to recognize family, school, and community resources and supports. - Appreciating diversity - Respect for others Talk to your child about how kindness and gratitude are connected. - Example: “What are you grateful for today? I’m grateful because my co-worker helped me with my project today. Was someone kind to you? Did you help someone today or brighten up their day by doing something nice?” Share your family values with your child. - Example: “In our family, we value honesty, loyalty, generosity and kindness. We also respect others, and we always try to value their feelings and ideas. How are some ways that you can apply these values to your own friendships?” Discuss the importance of being polite. - Example: “When you are talking or interacting with anyone, be polite by listening patiently and not interrupting people when they speak. If your friend does something nice for you, don’t forget to say ‘thank you,’ and if you do something wrong, try to apologize.”
Investigate the triangles that can be formed using one side of each of three squares. - Students will determine that a right triangle exists when the sum of the areas of the squares built on the short sides is equal to the area of the square built on the longest side. - Pythagorean Theorem - acute triangle - right triangle - obtuse triangle About the Lesson This activity allows students to experiment with three squares to see if they can make a triangle using one side of each square. They are then asked to classify the triangles and conjecture about the relationships between the areas of the three squares that produced acute, right, and obtuse triangles. This activity is a geometric visualization of the Pythagorean relationship: if the sum of the areas of the two small squares is equal to the area of the large square, then the triangle formed by one side of each square will be right.
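As a quick check of the area relationship the lesson targets, here is a minimal Python sketch (an editor's illustration, not part of the activity itself) that classifies the triangle from the three square areas:

```python
def classify_triangle_from_squares(a_area, b_area, c_area):
    """Classify the triangle built from one side of each square.

    The two smaller areas are the squares on the short sides;
    the largest area is the square on the longest side.
    """
    sides = sorted((a_area, b_area, c_area))
    small_sum, largest = sides[0] + sides[1], sides[2]
    # A triangle exists only if the longest side is shorter than the
    # sum of the other two sides (compare side lengths, not areas).
    a, b, c = (area ** 0.5 for area in sides)
    if c >= a + b:
        return "no triangle"
    if small_sum == largest:
        return "right"      # the Pythagorean relationship holds
    if small_sum > largest:
        return "acute"
    return "obtuse"

print(classify_triangle_from_squares(9, 16, 25))   # right (the 3-4-5 triangle)
print(classify_triangle_from_squares(9, 16, 36))   # obtuse
print(classify_triangle_from_squares(16, 16, 25))  # acute
```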
Lesson One: Introduction to Digital and Physical Archives
- Before Teaching
- Lesson Plan
- Activities, Materials & Presentations
- Curriculum Standards
- Download Lesson Plan [PDF format]
- Introduction to Digital and Physical Archives: Distance Learning Video
Archives are facilities that house physical collections, where records and materials are organized and protected. Archival materials are used to write history. Through the internet, digital archives make those records more accessible to students, researchers, and the general public. Students learn to navigate a digital archive by browsing and performing effective keyword searches. Through this process, students learn how to use the Helen Keller Archive. They also learn the value of preserving information.
- Understand the function and significance of an archive.
- Describe the different capabilities of a physical and a digital archive.
- Know more about how archives can increase accessibility for people with visual and/or hearing impairments.
- Navigate the digital Helen Keller Archive using the Search and Browse tools.
- What is an archive?
- How do I use a digital archive?
- Why are archives important?
- Computer, laptop, or tablet
- Internet connection
- Projector or Smartboard (if available)
- Worksheets (provided, print for students)
- Helen Keller Archive: https://www.afb.org/HelenKellerArchive
- American Foundation for the Blind: http://www.afb.org
The Library of Congress images below can be used to illustrate and explain the Define an Archive section of this lesson.
Library of Congress: The Library of Congress Manuscript Reading Room. Courtesy of the LOC Manuscript Division.
The digital Helen Keller Archive homepage.
Other Digital Archive Examples
- Sports: Baseball Hall of Fame; primarily physical archive with partial photographic digital collection (https://baseballhall.org/about-the-hall/477) (https://collection.baseballhall.org)
- Politics: United Nations; primarily physical archive with online exhibits (https://archives.un.org/content/about-archives) (https://archives.un.org/content/exhibits)
- Comics: Stan Lee Archives (https://rmoa.unm.edu/docviewer.php?docId=wyu-ah08302.xml)
- History: Buffalo Bill Collection (https://digitalcollections.uwyo.edu/luna/servlet/uwydbuwy~60~60)
- Dogs: American Kennel Club; primarily physical archive with partial digital collection (https://www.akc.org/about/archive/) (https://www.akc.org/about/archive/digital-collections/)
- Art: Metropolitan Museum of Art Archives; physical archive with separate digital collections and library (https://www.metmuseum.org/art/libraries-and-research-centers/museum-archives)
- Travel: National Geographic Society Museum and Archives (https://nglibrary.ngs.org/public_home)
- National Geographic digital exhibits (https://openexplorer.nationalgeographic.com/ng-library-archives)
- Space travel: NASA Archive; partially digitized (https://www.archives.gov/space)
- Music: Blues Archive; partially digitized (http://guides.lib.olemiss.edu/blues)
- Books: J.R.R. Tolkien; physical archive (https://www.marquette.edu/library/archives/tolkien.php)
Ask and Discuss
- Do you have a collection? Baseball cards, rocks, seashells, gel pens, shoes, vacation souvenirs?
- Do you and/or your parents save your schoolwork or art projects?
- Where and how do you store old photos? Text messages?
- Personal collections are a kind of archive.
- Things that you store and organize (to look at later) make up a basic archive.
- If you wrote a guide for your friend to use when searching through your [vacation photos/baseball cards/drafts of your papers], you would be running an archive like the pros!
- Optional: Select a sample archive to show students; options are provided in the resource section.
Define an Archive
- Optional: Use the definitions provided in the lesson definitions.
- To be an archive, a collection must be:
- Composed of unique documents, objects, and other artifacts; and
- Organized to make sense of the collection so that people can find what they are looking for.
- An archive is sometimes also:
- Organized by an institution, managed by archivists, and made available to researchers.
- A record that tells us about a person, organization, or physical things.
- Typically held and protected in a physical repository, but it may also be made accessible electronically on a digital platform.
What are the advantages of a physical archive, where you can have the materials right in front of you, versus seeing them on a screen?
- A hands-on encounter with the past. For example, how would it feel to see and read the original Declaration of Independence at the National Archives?
- You can analyze the material properties of objects and manuscripts.
- Wider access to all the items held in the archive (not all items are digitized).
- You can flip through a physical folder rather than load a new page for every document.
- What do you think is "easier"?
- Have any students experienced something like this?
What are the advantages of a digital archive, where the materials are available to you in digital format, on a website?
- Accessible worldwide on the internet—you don't have to travel to see what's in the archive.
- Keyword searchable.
- Useful information, such as transcriptions and metadata, is often included.
- Accessible to people with disabilities, including those with impaired vision or hearing.
- For example, the digital Helen Keller Archive allows users to change the size and color of text and provides descriptions for multimedia, including photographs, film, and audio.
Who Is Archiving Information About You Right Now?
- How is the public able to access that information now? In the future?
- Is there information you would not want them to access now? In the future? Why?
Using the Helen Keller Archive
Open the digital Helen Keller Archive: https://www.afb.org/HelenKellerArchive
Note: The digital Helen Keller Archive team strongly recommends that this or a similar demonstration be included in the lesson, unless the teacher has formally taught these students browse and search techniques. We find that students are used to "Google"-style searches, which are not as effective on specialized sites like digital archives.
We are going to use the digital Helen Keller Archive. Who has heard of Helen Keller? Why is she famous? What did she do?
- Keller lost her sight and hearing at a young age but learned to sign, read, write, and speak, and graduated from college.
- She used her fame to advocate on behalf of blind and deaf communities, and fought for education and employment for blind people and for the inclusion of people with disabilities in society.
- She was politically active: she was anti-war and advocated for socialism and workers' rights, as well as the suffrage movement and women's rights.
- Distribute the student versions of How to Search [download PDF] and How to Browse [download PDF] and explain that you will be going through a few sample searches as a class. Invite the class to follow along if feasible.
- Pull up the Helen Keller Archive home page and ask the class to explain the difference between search and browse. For example:
- The Browse tool follows the structure and order of the physical collection. Browse is the best way to see how an archive is organized and what it contains.
- The Search tool uses a keyword search term or terms. Search is the best way to find a specific item.
Show the Browse Function
- Click the Browse tab.
- Click Browse by Series; point out the series titles and ask students to explain what each "series" contains.
- In this archive, series are organized based on the type of materials (letters, photographs, and more).
- Explain that this is how a physical archive is organized (in series, subseries, boxes, and folders).
- Browse for a type of item. Guide students through the choices they have at each level.
- For example: "Browse the photographs in this archive. This series is divided into photographs and photo albums. Let's explore the photographs. How are these organized? It looks like they are organized alphabetically by subject matter. Wow, there are two folders here just for Helen Keller's dogs! Let's take a peek."
- Optional: Ask students to browse for the "boomerang given to Helen in Australia".
Show the Search Function
- Click the Simple Search tab.
- Ask the class to pick a word to search based on either their knowledge of Helen Keller or class curriculum on the late 19th/early 20th century.
- For example: Let's search for documents related to the women's suffrage movement. The best way to start a keyword search is with a simple keyword. Let's use "suffrage."
- Point out the filters in the left-hand column and explain how they are used to narrow search results. Ask students to choose one filter to refine the search and narrow their results for a specific reason.
- For example: "Let's select 1910-1920 so we can find material written before the 19th Amendment was passed."
- The search works like a library or e-commerce website.
- Optional: Ask students to search for a speech given by Helen Keller while she was traveling abroad. She gave many – they can choose any one. Brainstorm effective search terms and ways they might refine their results, and warn students it will take more than one step to find a speech that qualifies.
- Show the Browse by Subject functions and ask how they are similar to, or different from, searching by keyword(s).
- Use the same topic as the keyword search (or as close as possible). For example: Can you find "suffrage" in this subject list?
- Explain that not all topics will be present. For example, there is no subject header for "computers".
- Break students into working groups.
- Assign each group a "scavenger hunt" item (see the in-class worksheet).
- Optional: Collect scavenger hunt items in a private list to be shared with the whole class.
Sample Scavenger Hunt List - Flyer for a 1981 dance production “Two In One” - Film of Helen Keller testing a new communication device in 1953 - Medal from the Lebanese government - Photograph of Helen Keller at a United Nations meeting in 1949 - Or choose your own … Activities & Presentations for Teachers Activities for Students - Exploring the Digital Helen Keller Archive [PDF format] - Exploring the Digital Helen Keller Archive – The Needle in the Haystack [PDF format] Materials (Students & Teachers) - Definitions: [PDF format] - Frequently Asked Questions [PDF format] - How to Search [PDF format] - How to Browse [PDF format] This Lesson Meets the Following Curriculum Standards: Evaluate the advantages and disadvantages of using different mediums (e.g., print or digital text, video, multimedia) to present a particular topic or idea. Conduct short research projects to answer a question, drawing on several sources and generating additional related, focused questions for further research and investigation. Gather relevant information from multiple print and digital sources, using search terms effectively; assess the credibility and accuracy of each source; and quote or paraphrase the data and conclusions of others while avoiding plagiarism and following a standard format for citation. Integrate and evaluate content presented in diverse media and formats, including visually and quantitatively, as well as in words. Empire State Information Fluency Continuum - Uses organizational systems and electronic search strategies (keywords, subject headings) to locate appropriate resources. - Participates in supervised use of search engines and pre-selected web resources to access appropriate information for research. - Uses the structure and navigation tools of a website to find the most relevant information.
Chemical equilibrium refers to the state in which the concentrations of both the reactants and the products have no tendency to change over time during a chemical reaction. A chemical reaction achieves chemical equilibrium when the rate of the forward reaction and that of the reverse reaction are the same. Also, since the rates are equal and there is no net change in the concentrations of the reactants and the products, the state is referred to as a dynamic equilibrium, and the rate constant is known as the equilibrium constant. Let's find out more.
Law of Chemical Equilibrium
The attainment of chemical equilibrium can be represented as:
aA + bB ⇌ cC + dD
The equilibrium constant is defined as follows: the product of the molar concentrations of the products, each raised to the power equal to its stoichiometric coefficient, divided by the product of the molar concentrations of the reactants, each raised to the power equal to its stoichiometric coefficient, is constant at constant temperature. This equilibrium constant can also be expressed in terms of the partial pressures of the reactants and the products. When it is expressed in terms of partial pressures, it is denoted by Kp.
Browse more Topics under Equilibrium
- Acids, Bases and Salts
- Buffer Solutions
- Equilibrium in Chemical Processes
- Equilibrium in Physical Processes
- Factors Affecting Equilibria
- Ionization of Acids and Bases
- Solubility Equilibria
Equilibrium Constant Units and Formula
The law of mass action also forms the basis here. It states that the rate of a chemical reaction is directly proportional to the product of the concentrations of the reactants raised to their respective stoichiometric coefficients. Therefore, given the reaction –
aA(g) + bB(g) ⇌ cC(g) + dD(g)
By using the law of mass action here,
- The forward reaction rate would be k+ [A]^a [B]^b
- The backward reaction rate would be k− [C]^c [D]^d
where [A], [B], [C] and [D] are the active masses, k+ and k− are the rate constants of the forward and backward reactions, and a, b, c, d are the stoichiometric coefficients of A, B, C and D respectively. However, at equilibrium the forward and backward rates are equal:
Rate of forward reaction = Rate of backward reaction
Kc is the equilibrium constant expressed in terms of the molar concentrations. The equation
Kc = [C]^c [D]^d / ([A]^a [B]^b), or Kc = kf / kb,
is the Law of Chemical Equilibrium. The equilibrium constant is related to the standard Gibbs free energy change for the reaction by the equation
ΔG° = −RT ln Keq
where T is the temperature, R is the universal gas constant and Keq is the equilibrium constant.
Solved Examples for You
Question: Write the equilibrium constant expression for the reaction equation: NH3 + HOAc ⇌ NH4+ + OAc−
K = [NH4+][OAc−] / ([NH3][HOAc]) (unitless constant)
Question: A closed container has N2O4 and NO2 gases in it. It has been placed in the lab for many days. What would you consider the container and the gases to be?
- an open system
- a closed system
- not a system
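Returning to the generic reaction above, the short Python sketch below evaluates Kc from a set of equilibrium concentrations and converts it to a standard Gibbs free energy change via ΔG° = −RT ln Keq. The concentration values are invented purely for illustration.

```python
import math

R = 8.314    # universal gas constant, J/(mol·K)
T = 298.15   # temperature, K

# Hypothetical equilibrium concentrations (mol/L) for aA + bB <=> cC + dD
conc = {"A": 0.40, "B": 0.60, "C": 0.20, "D": 0.10}
coeff = {"A": 1, "B": 1, "C": 1, "D": 1}  # stoichiometric coefficients a, b, c, d

# Kc = [C]^c [D]^d / ([A]^a [B]^b)
Kc = (conc["C"] ** coeff["C"] * conc["D"] ** coeff["D"]) / (
    conc["A"] ** coeff["A"] * conc["B"] ** coeff["B"]
)

# Standard Gibbs free energy change: dG = -RT ln Keq
dG = -R * T * math.log(Kc)

print(f"Kc = {Kc:.4f}")
print(f"dG = {dG / 1000:.2f} kJ/mol")
```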
Aug 16, 2022
GEOL 1200 - The Mobile Earth
An examination of the earth's dynamic systems including continental drift, sea floor spreading, mountain building, volcanic activity, and earthquakes, and their explanation in terms of plate tectonic theory. Intended for both science and nonscience majors seeking a nontechnical overview of plate tectonics.
Credit Hours: 3
OHIO BRICKS Pillar: Natural Sciences
General Education Code (students who entered prior to Fall 2021-22): 2NS
Repeat/Retake Information: May be retaken two times excluding withdrawals, but only the last course taken counts.
Lecture/Lab Hours: 3.0 lecture
Grades: Eligible Grades: A-F,WP,WF,WN,FN,AU,I
Course Transferability: OTM course: TMNS Natural Sciences
College Credit Plus: Level 1
- Become familiar with the concept of paleomagnetism, its use in developing the hypothesis of sea-floor spreading, and the evidence that substantiated this hypothesis.
- Become familiar with the process of terrane accretion (as exemplified by the North American Cordillera) as a prelude to continental collision.
- Know that mid-ocean ridges represent the surface manifestation of divergent plate boundaries and understand the processes occurring at these sites (as exemplified by the Mid-Atlantic Ridge, the East Pacific Rise and Iceland).
- Know the geologic time scale and understand the concept of geologic time and the history behind our efforts to determine the age of the Earth.
- Learn the concept of continental drift, the evidence upon which it was based, and the reasons for its rejection by the contemporary scientific community.
- Recognize the role of supercontinents, such as Pangea, in influencing the geologic, climatic and biological evolution of the earth.
- Understand convergent plate boundaries (as exemplified by the Pacific Ring of Fire, Indonesia and the Mediterranean), the volcanoes associated with these boundaries, and their impact on society.
- Understand the concept of hot spots on the earth's surface (as exemplified by Hawaii and Yellowstone), their probable origin and their role in the break-up of continental land masses.
- Understand the process of continental break-up through the development of rift systems (as exemplified by the East African Rift Valley) and their opening to form oceans.
- Understand the process of subduction and the recognition of its link to sea-floor spreading in formulating the theory of plate tectonics.
- Understand the processes associated with continental collision (as exemplified by the Appalachians, Alps and Himalayas).
- Understand transform faults such as the San Andreas, their role as conservative plate boundaries, the earthquakes they produce, and our efforts to predict earthquakes and mitigate their effects.
Agricultural Waste into Biopolymers?
Article from | Teysha Technologies
Southeast Asian regions are some of the biggest worldwide agriculture producers, as well as the main areas responsible for biomass wastes such as agricultural residues, wood biomass, animal waste and municipal solid waste, while also producing millions of tonnes of single-use plastics. To tackle such waste head-on, second-generation plastic substitute specialist Teysha Technologies has developed a patented chemistry platform that can be used to transform agricultural waste into a wide variety of sustainable packaging and construction materials.
In the southern Asiatic regions, including Thailand, Indonesia, the Philippines and Vietnam, more than 38 million tonnes of rice husk and 34 million tonnes of bagasse are produced every year. Malaysia, Indonesia and Thailand are also responsible for producing more than 90 per cent of global palm oil, which consequently generates 27 million tonnes of waste per annum from fruit bunches (EFBs), fibres, shells and liquid effluent.
Although southern regions in Asia are major worldwide agriculture producers, farming practices are sometimes antiquated and often environmentally harmful. Every year, thousands of tonnes of biomass, including stems, leaves and seed pods, are destroyed; waste from crops is commonly left in the field to decompose or – much worse – burned, causing approximately 13 per cent of agricultural greenhouse gas emissions, according to a recent study in Science Direct.
This technique of burning agricultural waste is often referred to as slash-and-burn, which involves large fires and can greatly contribute to mass deforestation. The practice causes air pollution, as many toxic gases, such as methane, nitrogen oxide and ammonia, are released into the atmosphere. Breathing these gases can in turn pose further health risks, such as asthma, chronic bronchitis, and eye and skin diseases.
In Indonesia, for example, agricultural businesses regularly use the slash-and-burn method to clear vegetation and waste from their land every year. Indonesia's national disaster agency counted more than 328,724 hectares of land burnt from January to August 2019, which caused the closure of schools and offices as the haze reached very unhealthy levels on the Air Pollutants Index (API).
"Burning agricultural waste destroys the quality of the soil. When crops are burnt, existing minerals and organic material are destroyed, such as the cellulose and the sugar from the trees; starch from tapioca, corn and wheat; and even coconut, palm, soy and rapeseed, all of which are potentially valuable natural resources," said Matthew Stone, Chairman at plastic substitute specialist Teysha Technologies.
"Large multinational agricultural conglomerates often own and operate both upstream agricultural production and downstream manufacturing, packaging and distribution operations in Asia. This means they are not only responsible for large-scale agricultural waste, but also for the use of millions of tonnes of single-use plastic every year in their product packaging, plastic which is polluting rivers and oceans at an unacceptable rate.
"In a 2015 report, the non-profit organisation Ocean Conservancy noted that 55 to 60 per cent of plastic waste entering the world's oceans comes from just five countries, four of which are in the region: China, Indonesia, the Philippines, Thailand and Vietnam. The debris kills marine life and breaks down into microparticles that make their way into the food chain. China alone is responsible for producing 8.8 million metric tonnes of plastic waste that goes directly into our oceans. At this rate, according to a World Economic Forum report, there will be more plastic than fish in the ocean by 2050.
"Instead of using traditional petrochemical plastics in their production and distribution networks, large agricultural conglomerates could embrace new biopolymer technology and profit from their vast waste streams, while saving the planet from incessant pollution. Waste from common crops like sugar bagasse, tapioca, corn and wheat contains cellulose and starch, two natural raw materials that our chemical engineers can use to create a second generation of bioplastics and help reduce waste in a sustainable way."
Teysha Technologies is an innovative UK company that has been making huge inroads into producing biopolymers from organic feedstocks. The versatile technology platform it has created is based on polymers derived entirely from natural feedstocks, which can degrade in natural environments within a relatively short period of time. Most importantly, these polymers are a realistic substitute for existing petroleum-based polycarbonates.
"Teysha's technology is a plug-and-play system that uses monomers and co-monomers — also known as the natural building blocks that make up plastic — to create a polymer that works and functions like normal plastic," explains Stone. "Instead of using hydrocarbon-based petrochemicals sourced from fossil fuels, Teysha uses natural sources such as starches and agricultural waste products.
"The positive aspect of this platform is that the physical, mechanical and chemical properties of the polymers can be tuned to make them usable in a wide variety of applications and materials."
The strength, toughness, durability and longevity of Teysha Technologies' polymers can suit many different uses. That means it is possible to create either rigid or flexible materials, or even different polymers with different thermal properties. Most importantly, it is possible to control the biodegradation of the polymers, which means their breakdown can be effectively scheduled to occur within weeks or years. At the point of degradation, they break down into their basic natural building blocks, which is beneficial to the environment.
As worldwide sustainability and plastic pollution policy changes rapidly, the Asian market needs new and effective strategies to make the best use of its abundant natural resources whilst protecting its rivers and oceans. Platforms like the one created by Teysha can play a vital part in this sustainability mission.
The content & opinions in this article are the author's and do not necessarily represent the views of AgriTechTomorrow.
Bluetooth is a wireless communication protocol used to exchange information or data over short distances between fixed or mobile devices. Bluetooth wireless technology is integrated into these kinds of electronic devices to enable users to exchange information such as music, videos, and pictures wirelessly.
Bluetooth technology was created in 1994 by engineers working at Ericsson in Sweden. The Bluetooth Special Interest Group was formed in 1998 by various companies to develop, maintain, and license Bluetooth standards. Bluetooth is implemented with the help of a radio technology termed frequency-hopping spread spectrum: it slices the data into chunks, which are transmitted on up to 79 frequencies (the channel arithmetic is sketched in the short example after the pairing steps below). Bluetooth technology enables electronic devices to exchange data via the unlicensed, worldwide Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
Bluetooth devices communicate with each other with the aid of Bluetooth profiles. A Bluetooth profile is a wireless interface specification made specifically for Bluetooth devices. The use of profiles helps Bluetooth device manufacturers work with each other without concerns about incompatibility or redundancy.
How to Connect a Bluetooth Headset to a Computer
In order to connect a Bluetooth headset to a computer or laptop, the computer must be Bluetooth enabled. Today most computers are designed with built-in Bluetooth functionality. If your computer is not Bluetooth enabled, you will have to install a Bluetooth adapter which plugs into a USB port. These Bluetooth adapters are packaged with specific drivers stored on a CD.
To pair and connect a Bluetooth headset to a Bluetooth-enabled computer, perform the following steps:
- Click Start and then click the menu option Control Panel.
- The Control Panel window appears.
- Double-click the Bluetooth icon.
- The Bluetooth Devices window appears.
- Click the Options tab, and select the option Turn discovery on. Enabling this option will permit the computer to detect Bluetooth devices within close range. Select the option Allow Bluetooth devices to connect to this computer.
- Turn on your Bluetooth headset so it can be detected by the computer.
- Open Bluetooth settings by typing bthprops.cpl via the Run command.
- The Bluetooth Settings window appears. Click the button Add.
- Select the checkbox My device is set up and ready to be found in the Add Bluetooth Device Wizard, and click Next.
- The headset icon will appear in the wizard when it has been detected.
- Select the headset and then click Next. Enter the passcode for your headset.
- The computer and your Bluetooth headset are now paired.
- The headset should now appear in Bluetooth Devices.
- Double-click the My Bluetooth Places icon on the desktop.
- Right-click the headset icon and select the option Connect Headset.
- Accept the connection on your headset after you hear a beep.
Note: All Bluetooth devices have to be paired with the target device before they can be connected.
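As promised above, here is a minimal Python sketch of the channel arithmetic behind frequency hopping in classic Bluetooth: 79 channels of 1 MHz each in the 2.4 GHz ISM band, with channel k centered at (2402 + k) MHz. The random hop order below is a stand-in for illustration only; the real hop sequence is a pseudo-random pattern derived from the master device's address and clock.

```python
import random

# Classic Bluetooth: 79 channels of 1 MHz in the 2.4 GHz ISM band,
# channel k centered at (2402 + k) MHz for k = 0..78.
channels_mhz = [2402 + k for k in range(79)]

# Stand-in hop sequence (illustration only; the real sequence is
# derived from the master's address and clock, not random).
for k in random.sample(range(79), 5):
    print(f"hop to channel {k:2d}: {channels_mhz[k]} MHz")
```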
A study of microbial populations under a prolonged period of starvation by Indiana University professor Jay T. Lennon and his laboratory could help researchers answer questions pertaining to chronic infections, the functioning of bacteria in the environment and the persistence of life itself. In a paper published online Aug. 12 by the Proceedings of the National Academy of Sciences of the United States of America, Lennon and his colleagues explain their study of about 100 populations of different bacteria in closed systems, which had no access to external food for 1,000 days. The team tracked how long they survived, and almost all of them persisted. "The larger question of how bacteria survive long periods of energy limitation is relevant to understanding chronic infections in humans and other hosts, and is related to how some pathogens tolerate drugs like antibiotics," said Lennon, a professor in the Department of Biology in the College of Arts and Sciences. Many bacterial infections are difficult to treat, in part, because drugs are often designed to target the cellular machinery of metabolically active cells. Energy-limited bacteria often enter a quiescent, or dormant, state that makes them less sensitive to drug treatments, Lennon said. Not only can the pathogens persist under such conditions, the populations can also evolve antibiotic resistance, making the problem worse. Microbes also play an important role in the environment. The bacteria in the study came from agricultural soils. In those habitats, Lennon said, microbes form symbiotic relationships with plants, and they carry out processes that are essential for the functioning of ecosystems, such as carbon sequestration, nutrient cycling and greenhouse gas emissions. A major and unresolved question is how billions of microbial cells and thousands of microbial taxa coexist in a single gram of soil, often under harsh environmental conditions. One explanation supported by the research is that microbes seem to be well-adapted to feast-or-famine conditions, where resources can be in short supply for extended periods. This may help explain how complex microbial communities are maintained over time. In the study, Lennon and his colleagues estimated that bacteria, which are the fastest-reproducing organisms on the planet, can also be extremely long-lived. Lennon and his team, including former Indiana University doctoral student William Shoemaker, estimated that energy-limited bacteria can have lifespans that rival, and in some cases exceed, those of plants and animals. The study used survival analyses to estimate that some populations have extinction times of up to 100,000 years. "Obviously, these predictions extend far beyond what can be measured," Lennon said, "but the numbers are consistent with the ages of viable bacteria that have been recovered from ancient materials, such as amber, halite crystals, permafrost and sediments at the bottom of the deepest oceans." The persistence of microbes under such conditions likely involves dormancy and other mechanisms that conserve energy. For example, Lennon and colleagues found that the survival of cells in their closed system was sustained by the ability of bacteria to "scavenge" their dead relatives. Under these lean conditions, where cells must eke out a living on vanishingly small quantities of food, Lennon and his team were curious about the potential for bacteria to evolve. 
They identified genes that were under negative selection, but also signatures of positive selection, which indicate cryptic growth that allowed new mutations to increase in frequency. This finding suggests that the recycling of dead cells has the potential to fuel adaptive evolution. Such observations are relevant for understanding the constraints on fundamental biological processes, given that large swaths of the planet are energy limited.
Shoemaker, W.R., et al. (2021) Microbial population dynamics and evolutionary outcomes under extreme energy limitation. Proceedings of the National Academy of Sciences. doi.org/10.1073/pnas.2101691118
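The article does not publish the underlying census numbers, but the flavor of the survival extrapolation mentioned above can be shown with a toy exponential-decay model in Python. Every value below is hypothetical; the study itself fit survival curves to 1,000 days of observations.

```python
import math

# Toy model: surviving cells N(t) = N0 * exp(-k * t).
N0 = 1e9                  # hypothetical starting population (cells)
half_life_days = 1000.0   # hypothetical die-off half-life
k = math.log(2) / half_life_days

# Call the population "extinct" once expected survivors drop below one cell.
extinction_days = math.log(N0) / k
print(f"extinction time: about {extinction_days / 365:.0f} years")
```

Slower measured die-off rates stretch the estimate dramatically, which is how decay curves observed over a few years can imply extinction times of many millennia.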
A group of polymers found across several members of the oldest meteorite class, the CV3 type, sheds light on space chemistry as early as 12.5 billion years ago.
Many meteorites, which are small pieces from asteroids, do not experience high temperatures at any point in their existence. Because of this, these meteorites provide a good record of the complex chemistry present when or before our solar system was formed 4.57 billion years ago. For this reason, researchers have examined individual amino acids in meteorites, which come in a rich variety, many of which are not found in present-day organisms.
In Physics of Fluids, by AIP Publishing, researchers from Harvard University show the existence of a systematic group of amino acid polymers across several members of the oldest meteorite class, the CV3 type. The polymers form organized structures, including crystalline nanotubes and a space-filling lattice of regular diamond symmetry with a density estimated to be 30 times less than that of water.
"Because the elements required to form our polymers were present as early as 12.5 billion years ago, and there appears to be a gas phase route to their formation, it is possible that this chemistry was and is present throughout the universe," said author Julie McGeoch.
Preventing terrestrial contamination was a top priority for the researchers. They devised a clean-room method using a clean stepper motor with vacuum-brazed diamond bits to drill several millimeters into the meteorite sample before retrieving newly etched material from only the bottom of the hole. Several drill bits were used in a single etch, all cleaned by ultrasonication. The resulting micron-scale meteorite particles were then placed in tubes and stored at minus 16 degrees Celsius.
Polymers were induced to diffuse out of the micron particles via Folch extraction, which involves two chemical phases related to different solvents with different densities. Mass spectrometry revealed the existence of the polymers, which were composed of chains of glycine, the simplest amino acid, with additional oxygen and iron. They had a very high deuterium-to-hydrogen isotope ratio that confirmed their extraterrestrial origin.
This research was inspired by observations of a small, highly conserved biological protein that entrapped water. That finding suggested that if such a molecule could form in the gas phase in space, it would aid early chemistry by supplying bulk water. The researchers employed quantum chemistry to show that amino acids should be able to polymerize in space within molecular clouds, retaining water of polymerization. Many experiments followed using meteorites as the source of polymer, culminating in 3D structures.
Going forward, the researchers hope to obtain more detail of the glycine rods via continued X-ray analysis. Other polymers in the same class remain to be characterized and could reveal the energetics of polymer formation.
Reference: "Structural organization of space polymers" by Julie E. M. McGeoch and Malcolm W. McGeoch, 29 June 2021, Physics of Fluids.
When we think about all of the essential ingredients necessary to support life, enzymes are often biology's unsung heroes. These molecules are responsible for enabling and accelerating the chemical reactions that take place in cells. Among the many fascinating facts about enzymes is that these molecules are highly specialized: enzymes perform a wide range of functions, including fueling the metabolism of organisms, yet each type of enzyme has a dedicated role in driving a specific reaction.
While enzymes have literally been with us since the genesis of life, they represent a relatively new advancement as an agricultural input capable of positively impacting plant health, nutrition and yield. Despite the broad benefits enzymes can bring to agriculture, historically the nature of agricultural inputs has limited an enzyme's ability to thrive and function. VersaShield® from Elemental Enzymes is a patented process that enables enzymes, peptides and proteins to be more stable, which, in turn, has enabled performance and consistency on par with or better than alternative, more traditional hard chemistry inputs.
Through extensive testing, Elemental Enzymes has produced and stabilized enzymes that catalyze reactions for, or near, a plant, enabling plants to more naturally fight disease and deliver greater yield. These yield-generating enzymes work in the soil, providing more nutrients to the plant. Because enzymes are biologicals, they provide a safe, targeted solution for farmers. Another strong benefit of enzyme-based agricultural inputs is that enzymes play well with others, including fertilizers, herbicides and insecticides.
There are literally thousands of enzymes capable of catalyzing more biological reactions for the plant – and with them come countless opportunities to solve many of our planet's toughest challenges: producing more food with fewer resources and deploying more effective and sustainable solutions to mitigate climate change.
Anatomy of the Patella
The patella is a small bone in front of the knee that slides up and down the groove in the femur during bending and stretching movements. The ligaments on the inner and outer sides of the patella hold it in the femoral groove and prevent dislocation of the patella from the groove.
What is Patellar Instability?
Any damage to the supporting ligaments may cause the patella to slip out of the groove either partially (subluxation) or completely (dislocation). This misalignment can damage the underlying soft structures such as the muscles and ligaments that hold the kneecap in place. Once damaged, these soft structures are unable to keep the patella (kneecap) in position. Repeated subluxation or dislocation makes the knee unstable; this condition is called knee instability. Patellar (kneecap) instability results from one or more complete or partial dislocations (subluxations).
Symptoms of Patellar Instability
The signs and symptoms of patellar instability include:
- Pain, especially when standing up from a sitting position
- A feeling of unsteadiness or a tendency of the knee to give way or buckle
- Recurrent subluxation
- Recurrent dislocation
- Severe pain, swelling and bruising of the knee immediately following subluxation or dislocation
- Visible deformity and loss of function of the knee, which often occur after subluxation or dislocation
- Changes in sensation, such as numbness or even partial paralysis, which can occur below the dislocation because of pressure on the nerves and blood vessels
Causes of Patellar Instability
Various factors and conditions may cause patellar instability. Often a combination of factors causes this abnormal tracking, including the following:
- Anatomical defect: Flat feet or fallen arches and congenital abnormalities in the shape of the patella can cause misalignment of the knee joint.
- Abnormal "Q" angle: The "Q" angle is a medical term used to describe the angle between the hips and knees. The higher the "Q" angle, as in knock knees, the more the quadriceps pull on the patella, causing misalignment.
- Patellofemoral arthritis: Patellar misalignment causes uneven wear and tear and can eventually lead to arthritic changes in the joint.
- Improper muscle balance: The quadriceps, the anterior thigh muscles, help hold the kneecap in place during movement. Weak thigh muscles can lead to abnormal tracking of the patella, causing it to subluxate or dislocate.
Diagnosis of Patellar Instability
Your surgeon diagnoses the condition based on your medical history and physical findings, and may also order certain tests such as X-rays, MRI or CT scans to confirm the diagnosis.
Treatment of Patellar Instability
The treatment for instability depends on the severity of the condition and is based on the diagnostic reports. Initially, your surgeon may recommend conservative treatments such as physical therapy and the use of braces and orthotics. Pain-relieving medications may be prescribed for symptomatic relief. However, when these conservative treatments yield an unsatisfactory response, surgical correction may be recommended. Considering the type and severity of the injury, your surgeon decides on the surgical correction. A lateral retinacular release may be performed, where your surgeon releases or cuts the tight ligaments on the lateral side (outside) of the patella, enabling it to slide more easily in the femoral groove.
Your surgeon may also perform a procedure to realign the quadriceps mechanism by tightening the tendons on the inside, or medial side, of the knee. If the misalignment is severe, a tibial tubercle transfer (TTT) will be performed. This procedure involves removing a section of bone where the patellar tendon attaches to the tibia. The bony section is then shifted, properly realigned with the patella, and reattached to the tibia with two screws. Following the surgery, a rehabilitation program may be recommended for better outcomes and a quicker recovery.
Tracking down the origin of cholera pandemics
A new bacterial strain replaced older strains during the seventh cholera pandemic
The bacterium Vibrio cholerae is the causative agent of the diarrheal disease cholera and is responsible for seven known pandemics. The seventh cholera pandemic began in 1961 and is still active. Unlike previous pandemics, it is caused by cholera strains of a slightly different type. How did the modified cholera strains develop and spread, and what might have contributed to their success? Scientists from the Max Planck Institute for Evolutionary Biology in Plön, Germany, and CAU Kiel, in an international team with colleagues from City College New York and the University of Texas Rio Grande Valley, have now gained new insights into a molecular mechanism that shapes the interactions between cholera bacteria and may have played a role in the emergence of the seventh pandemic.
In their natural environment, bacteria compete with other bacteria for space and nutrients, and molecular mechanisms help them hold their own. One such mechanism is the so-called "type 6 secretion system" (T6SS), with which a bacterium transports toxic proteins into a neighboring bacterium and thereby kills it. Cholera bacteria of the seventh pandemic use their T6SS to keep other bacteria in check and presumably cause infection more easily.
Researchers now had the special opportunity to study the T6SS of cholera bacteria from previous pandemics. For this purpose, among other things, the T6SS genome sequence of cholera bacteria from the 2nd pandemic was reconstructed in a complex procedure from a 19th-century museum specimen and recreated in the laboratory. In the process, the scientists were able to show that 2nd and 6th pandemic cholera bacteria lack a functional T6SS. As a result, the bacteria of earlier pandemics not only lack the ability to attack other bacteria, they are themselves killed by bacterial strains of the seventh pandemic. This may have been one of the reasons that older cholera strains were displaced by the modified cholera strains of the seventh pandemic and are now hard to find.
Data from a new lab
Daniel Unterweger, one of the study's authors and a group leader at the Max Planck Institute in Plön, Germany, says: "With these findings, we support the theory that microbial competition between bacteria is very important for understanding pathogens and bacterial pandemics. Our research on the cholera bacterium was made possible by an S2 laboratory newly established at the institute. Here, we can conduct experiments with bacterial pathogens under the necessary safety precautions. The study contains some of the first data from the new laboratory."
Page 1 of 4 Please PAUSE the "How to Draw a Kangaroo" video after each step to draw at your own pace. For the first few steps, don't press down too hard with your pencil. Use light, smooth strokes to begin. Step 1: Draw an oval as a guide for the lower part of the kangaroo's body. It doesn't have to be a perfect shape. It's just a guide. Step 2: Draw a small circle on the upper right side as a guide for the upper part of the kangaroo's body. The circle should be around half the size of the original oval, and their outer edges should meet. Step 3: Draw an even smaller circle on the upper right side of that. Pay attention to the positions of the circles and their different sizes. Step 4: Draw a small arc on the right side of the head as a guide for the kangaroo's muzzle. Step 5: Draw a longer, thinner arc on top of the head as a guide for the kangaroo's ears.
Our team has discovered a single-molecule switch that can act like a transistor and store binary information such as the 1s and 0s used in classical computing. The molecule is around five square nanometers in size — more than one billion of them would fit onto the cross-section of a human hair. Based on our experiments, molecules like the ones we have discovered could offer information densities of around 250 terabits per square inch, which is around 100 times the storage density of current hard drives.
In the study, molecules of an organic salt can be switched using a small electrical input to appear either bright or dark, providing binary information (see image). This information can be written, read, and erased at room temperature and under normal air pressure. These are important characteristics for the practical application of the molecules in computer storage devices. Most previous research into molecular electronics for similar applications has been conducted in vacuum and at very low temperatures.
There are several properties that a molecule must possess to be useful as a molecular memory. Apart from being switchable in both directions under ambient conditions, it must be stable for a long time in the bright and dark states and spontaneously form highly ordered layers that are only one molecule thick, in a process called self-assembly. To our knowledge, ours is the first example that combines all these features in the same molecule.
In laboratory experiments, our team used small electric pulses in a scanning tunneling microscope to switch individual molecules from bright to dark. We were also able to read and erase the information afterward at the press of a button. During the switching, the electric pulse changes the way the cation and the anion in the organic salt are stacked together, and this stacking causes the molecule to appear either bright or dark.
Apart from the switching itself, the spontaneous ordering of the molecules is crucial — through self-assembly, they find their way into a highly ordered structure (a two-dimensional crystal) without the need for expensive manufacturing tools, as is the case in currently used electronics. Also, the smart molecules themselves are prepared using standard synthetic chemistry protocols, which allows them to be made in astronomical numbers and with atomic precision at low cost, something that is hard to imagine for any top-down nanoscale object.
Angewandte Chemie Int. Ed. 2020, 59, 14049–14053, doi.org/10.1002/anie.202004016
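A quick back-of-the-envelope check of the density claim, assuming one bit per molecule: the published figure of around 250 terabits per square inch presumably rests on tighter packing assumptions than the simple footprint division below, so treat this only as an order-of-magnitude sanity check.

```python
# 1 inch = 2.54 cm = 2.54e7 nm, so one square inch in square nanometers:
in2_in_nm2 = (2.54e7) ** 2          # about 6.45e14 nm^2

area_per_molecule_nm2 = 5.0          # "around five square nanometers"
bits_per_in2 = in2_in_nm2 / area_per_molecule_nm2

print(f"about {bits_per_in2 / 1e12:.0f} terabits per square inch")
# ~129 Tbit/in^2 at 5 nm^2 per bit: the same order of magnitude as the
# quoted ~250 Tbit/in^2.
```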
Through learning reflection activities, teachers obtain positive and negative information about the learning activities that have been carried out, along with insight into how they can improve the quality of that learning. The results of learning reflection can also be used as observation material to find out how far learning activities have progressed toward their goals, and reflection can give students a sense of satisfaction.
Learning reflection is carried out by teachers and students together, so both can feel the benefits of the activity. For teachers, learning reflection is useful for reviewing a group or class, describing its situation or condition, and discovering the potential of each individual student in the class. In this way, teachers can improve evaluation activities continuously and at every level. For students, the benefit of learning reflection lies in expressing how the learning process went, whether it was good or still lacking. This trains students' self-confidence in expressing opinions and helps them improve learning activities according to the interests and methods they want.
One of the most common examples of learning reflection is reviewing material that has been studied previously. This activity is usually carried out independently: after understanding the material, the student reads through all the related notes from the beginning and writes down the important points. After finishing the notes, the student can practice explaining the material without looking at the book, in front of a mirror or with someone else listening. This method helps students not merely memorize, but genuinely understand and explain the material.
No less important than intelligence in education is shaping students' character. It is very important to reflect on what students have gained from studying the material; indirectly, this teaches students to think critically, to think outside the box, and to build a quality mindset. Many children grow up clever but are only able to work on questions or material similar to what has been taught; when given a slight modification, they tend to be confused and may not even finish.
The next step in reflection, planning learning behavior for the days ahead, is useful for changing bad study habits. First, it is necessary to identify the mistakes and whatever else hinders students from learning, and students must have the willingness and intention to change these bad habits. To reduce the intensity of bad behavior, it should be replaced with something positive. This will later have an impact on the quality of learning in the future.
Of course, each student has a different level of understanding. That is why a single method applied across a school and used to teach many students at once can be less effective: with the same learning method for all, some students will be left behind because they cannot follow it. This is why it is important to reflect with each student to see whether there are difficulties with the learning method used.
This reflection is done and written up by the teacher concerned. By keeping a journal, the teacher gains evaluation material at the end of the semester, which will later help to improve learning outcomes. The journal can be used to analyze the learning process and plan the steps that will be taken to overcome problems.
Those are some key points about learning reflection: its benefits, its objectives, and some examples. Reflecting on learning matters because it helps create better learning activities in the future.
Gastrostomy Tube (G-Tube) What Is a G-Tube? Some kids have medical problems that make it hard for them to get enough nutrition by mouth. A gastrostomy tube (also called a G-tube) is a tube inserted through the belly that brings nutrition directly to the stomach. It's one of the ways doctors can make sure kids who have trouble eating get the fluid and calories they need. A surgeon puts in a G-tube during a short procedure called a gastrostomy. The G-tube can stay in place for as long as a child needs it. Kids who have had a gastrostomy (ga-STROSS-teh-mee) can get back to their normal activities fairly quickly after they have healed. Who Needs a G-Tube? Kids need G-tubes for different kinds of health problems, including: - congenital (present at birth) problems of the mouth, esophagus, stomach, or intestines - sucking and swallowing disorders (due to premature birth, injury, a developmental delay, or another condition) - failure to thrive (when a child can't gain weight and grow normally) - extreme problems with taking medicines What Happens Before G-Tube Placement? Doctors often order several tests before a child can get a G-tube. The most common test is an X-ray of the upper gastrointestinal (GI) system. This lets the doctor see the upper part of the digestive system. Sometimes the surgeon asks the family to meet with specialists, such as a gastroenterologist, dietitian, or social worker. This is to prepare a care plan so everything will be set up when the child goes home with the G-tube. To get ready for the procedure, you will need to carefully follow instructions about when your child must stop eating and drinking. When you get to the hospital, the doctor will describe what will happen and answer any questions. The anesthesiology team will ask about your child's medical history and when your child last ate and drank. Before the procedure begins, the care team sets up monitors to keep track of your child's vital signs (like blood pressure and oxygen level) and puts in an intravenous line (IV) to give medicines and anesthesia. Your child will go to the operating room, and you'll go to a waiting area. A hospital staff member will tell you when the procedure is over. What Happens During G-Tube Placement? There are three ways doctors can insert a G-tube. Sometimes a combination of methods is used. - The laparoscopic technique is done by making two small incisions (cuts) in the belly. One is for inserting the G-tube, and the other is where the surgeon inserts a tiny telescope called a laparoscope. The laparoscope helps the surgeon see the stomach and other organs and guide the G-tube into place. - Open surgery is done with larger incisions. Surgeons choose this method to guide the G-tube into place when other methods are not a good choice — for example, if there is scar tissue from a past surgery or if the child needs another surgery done at the same time. - The PEG procedure stands for percutaneous (through the skin) endoscopic gastrostomy. The surgeon inserts an endoscope (a thin, flexible tube with a tiny camera and light at the tip) through the mouth and into the stomach to guide the G-tube into place. How Long Does G-Tube Placement Take? Putting in a G-tube takes only about 30 to 45 minutes. What Happens After G-Tube Placement? Kids usually stay in the hospital for 1 or 2 days. Most hospitals let a parent stay with their child. While in the hospital, your child will get pain medicine as needed. 
The nurses will teach you how to:
- Care for the tube and the skin around it to keep it clean and infection-free.
- Handle potential problems, such as the tube accidentally falling out.
- Give a feeding through the tube. You will also learn what to feed.
- Help your child eat independently, if the doctor says it's OK.
By the time your child is ready to go home, you should have:
- detailed instructions on home care, including bathing, dressing, physical activity, giving medicines through the tube, and venting (releasing gas from) the tube
- a visit scheduled with a home health care nurse to make sure things are going smoothly
- follow-up visits scheduled with your doctor to check the tube and your child's weight
Are There Any Risks From G-Tube Placement?
All surgeries come with some risks. The surgical team will discuss them with you before the procedure and do everything possible to minimize them. If you have any concerns, be sure to bring them up before the procedure.
Complications of surgery can include:
- leaking around the tube site
- extra tissue (granulation tissue) forming at the tube site
- problems from anesthesia
- an allergic reaction
Granulation tissue or leaking can usually be fixed by caring for the wound as instructed or changing the feeding schedule. Sometimes surgery is needed to fix a problem at the surgery site.
How Can Parents Help After G-Tube Placement?
It's normal to feel a little bit nervous about the G-tube at first, but it's important that you feel comfortable taking care of it. Here are some tips:
- Always wash your hands well before caring for the G-tube.
- Always keep the feeding set tubing out of the way of infants and children. There is a risk that the feeding set tubing can get wrapped around a child's neck, which could lead to strangulation or death.
- Know what to expect as the G-tube heals. Talk to your child's care team if you have questions.
- Get support from other parents. It can help to connect with other parents whose kids have G-tubes. Ask your child's doctor about a support group, or look online.
- Talk with a social worker. Some kids with a G-tube worry about how the tube looks and how others might react. If your child is concerned, ask your care team to recommend a social worker who can help.
When Should I Call the Doctor?
Call your doctor if your child has any of these problems:
- a dislodged tube
- a blocked tube
- any signs of infection (including redness, swelling, or warmth at the tube site; discharge that's yellow, green, or foul-smelling; fever)
- excessive bleeding or drainage from the tube site
- severe belly pain
- vomiting or diarrhea that keeps happening
- trouble passing gas or having a bowel movement
- pink-red tissue coming out from around the tube
Most problems can be treated quickly when they're found early.
THE MINERAL BLODITE
- Chemistry: Na2Mg(SO4)2 · 4H2O, Hydrated Sodium Magnesium Sulfate
- Class: Sulfates
- Uses: Only as mineral specimens.
Blodite, which is also spelled bloedite, forms in marine and non-marine (lacustrine) evaporite deposits. Evaporite minerals are geologically important because they are clearly related to the environmental conditions that existed at the time of their deposition, namely arid conditions. They can also be easily recrystallized in laboratories, enabling sedimentologists to determine their specific characteristics of formation, such as temperature, solution concentrations, etc. Blodite also forms as an efflorescence on cave and mine walls. An efflorescent mineral is one that forms literally out of thin air, as a "precipitate" of sorts from fumes concentrated with the mineral's chemical makeup. Crystals of blodite are scarce, but well-formed crystals can show an intricate, multi-faceted, monoclinic form. Specimens of blodite should be stored in a sealed container, as they can dry out and crumble.
- Color is white, colorless, gray, yellow, red, green or blue-green.
- Luster is vitreous.
- Transparency: Specimens are translucent to transparent.
- Crystal System is monoclinic; 2/m.
- Crystal Habits include granular, earthy and encrusting masses. Individual intricate, multi-faceted, prismatic crystals are uncommon.
- Hardness is 2.5 - 3.
- Specific Gravity is approximately 2.2 - 2.3 (light for translucent minerals).
- Streak is white.
- Other Characteristics: Salty taste.
- Associated Minerals include other, rarer evaporite minerals.
- Notable Occurrences include the type locality of Chuquicamata, Antofagasta, Atacama Desert, Chile, as well as Soda Lake and other California sites and Coconino, Arizona, USA; Germany; Russia; Austria; Poland and India.
- Best Field Indicators are associations, density, crystal habit, taste and environment of formation.
History at Woodvale Primary Academy
At Woodvale, we pride ourselves on making our history curriculum fit our children and our community. We understand that we have a diverse community with diverse backgrounds, and we aim to draw on the knowledge and national pride that comes with that. We underpin all our learning with the National Curriculum objectives for each Key Stage, but have a tailored curriculum that guides the children through time, making links to the local area, its people, and also to their own home countries and languages. We try to raise the children's aspirations by telling them about famous places and people from Northamptonshire, instilling a sense of pride in the area and the belief that they too could become a scientist or athlete who influences people in the future. We have so much history in our local area, from the earliest times of Stone Age settlements, through Roman villas, Tudor and Victorian houses, and on to more recent history such as the evacuees of World War Two. We aim to harness this local element, making trips out or inviting visitors in to make history come alive for our children. In Key Stage One, history is based around childhood days, school and influential people from the past. In Key Stage Two, children go back to the earliest times, starting at the Stone Age before winding their way through the year groups and the years to the relatively modern history of World War Two, making many interesting stops along the way. Projects are designed to make links between curriculum areas, for example studying famous artists of the past in art or famous scientists through a science project. When they leave Woodvale, we want our children to understand that history is something that they are involved in and making, not something that goes on around them.
Communication disorders and special educational needs.
Attention Deficit Hyperactivity Disorder (ADHD) is a condition that involves lack of attention, hyperactivity and impulsiveness. A person with autism has difficulty with social communication, social interaction and social imagination. People with communication disorders have problems with speech, language or hearing that make it harder to learn or interact with others. A learning difficulty is a clear difficulty with reading, writing or maths; examples include dyslexia, dyspraxia and dyscalculia. A learning disability means that someone has a low IQ (less than 70) and significant difficulty with everyday tasks. A person has special educational needs (SEN) if they have learning difficulties or disabilities that make it harder for them to learn than most other children of about the same age.
How ocean water vapor may be an answer to a climate change issue DWANE BROWN, HOST: Of all the water on Earth, only about 2 1/2 percent is fresh water, and it's also vanishing fast due to climate change. LEILA FADEL, HOST: But researchers at the University of Illinois at Urbana-Champaign say climate change is also creating fresh water in the form of ocean vapor. PRAVEEN KUMAR: And if we could tap into that resource, we could supply fresh water without the need to desalinate. BROWN: Praveen Kumar is a professor who specializes in climate-driven changes in the water cycle. Kumar says existing methods to meet freshwater demands, like seeding clouds to make rain or removing salt from seawater, are inadequate and unsustainable. FADEL: So as global temperatures keep rising, his research team set out to find a long-term solution. KUMAR: Warmer air holds more moisture. We're also looking at warming of the ocean's surfaces. And as a result, evaporation will increase. So essentially, more evaporation and more moisture in the air and, therefore, more water. BROWN: Now, the study focused on 14 water-stressed cities around the world. The objective - to see whether it would be feasible to capture ocean vapor and turn it into fresh water. KUMAR: What we envision for this work is a capture surface. So if you think about putting something, say, in the ocean west of Los Angeles, with about 9 to 10 such structures meeting the entire drinking needs of the Los Angeles population. FADEL: The researchers say what they need next is some kind of apparatus to make this happen. KUMAR: It is now feasible to approach it from an infrastructure and a large-scale investment perspective and solve the problem. BROWN: Kumar says capturing moisture from over the oceans could provide a sustainable fresh water supply and solve one of the planet's great challenges. Transcript provided by NPR, Copyright NPR.
Knots, bends, and hitches are made from three fundamental elements: a bight, a loop, and a round turn. Observe figure 4-8 closely and you should experience no difficulty in making these three elements. Note that the free or working end of a line is known as the RUNNING END. The remainder of the line is called the STANDING PART. NOTE: A good knot is one that is tied rapidly, holds fast when pulled tight, and is untied easily. In addition to the knots, bends, and hitches described in the following paragraphs, you may have need of others in steelworking. When you understand how to make those covered in this chapter, you should find it fairly easy to learn the procedure for other types. The OVERHAND KNOT is considered the simplest of all knots to make. To tie this knot, pass the running end of a line over the standing part and through the loop that has been formed. Figure 4-9 shows you what it looks like. The overhand knot is often used as a part of another knot. At times, it may also be used to keep the end of a line from untwisting or to form a knob at the end of a line. Figure 4-8. Elements of knots, bends, and hitches
A portion of a physical disk that functions like a completely separate physical disk. Partitions allow physical disks to function as multiple separate storage units, for isolating operating systems from application data on a single-boot system or for isolating operating systems from one another on a multiboot system. Disks can have two types of partitions: primary partitions and extended partitions. You can create partitions by using the fdisk command in MS-DOS and all versions of Microsoft Windows, by using Disk Administrator in Windows NT, or by using the Disk Management tool in Windows 2000. Using the fdisk command, you can create one primary partition and one extended partition. Disk Administrator can create up to four primary partitions, or three primary partitions and one extended partition. Disk Management can create partitions only on basic disks, not on dynamic disks.
The global impacts of food waste
How much food is wasted? The global volume of food wasted per year is estimated at 1.3 Gtonnes. This can be compared to total agricultural production (for food and non-food uses such as textile fibers, energy crops or medicinal plants), which is about 6 Gtonnes. According to Practice Greenhealth's Sustainability Benchmark Report, hospitals generate over 29 pounds of waste per staffed bed per day; about one-third of healthcare's waste is food.
Where and how does food waste mostly occur? Waste happens at all steps of production, handling, storage, processing, distribution and consumption. Agricultural production is responsible for the greatest share of total food waste volumes, at 33% of the total. Waste occurring at the consumption level is much more variable: 31-39% in middle- and high-income regions, but much lower, at 4-16%, in low-income regions.
What is the impact of food waste on greenhouse gas emissions and climate? Without accounting for GHG emissions from land use change, the carbon footprint of food produced and not eaten is estimated at 3.3 Gtonnes of CO2-equivalent. For a sense of scale, when considering total emissions by country, only the USA and China are responsible for more emissions.
What is the water footprint related to food waste? Globally, the consumption of surface and groundwater resources attributable to food waste (the so-called blue water footprint) is about 250 km³ per year, equivalent to 3.6 times the water consumption of the USA over the same period. Animal products in general have a larger water footprint per ton of product than crops. This is one of the reasons why it is more efficient to obtain calories, protein and fat through crop products than through animal products.
What is the impact of food waste on land use? At the world level, in 2007, the total amount of food waste represented the use of 1.4 billion hectares of land, equal to about 30% of the world's agricultural land area, and larger than the surface of Canada. Low-income regions account for about two-thirds of this total. The major contributors to land occupation are meat and dairy products, with 78% of the total, whereas their contribution to total food waste is 11%. Land degradation is also an important factor in food waste. Most of the food waste at the agricultural production stage occurs in regions where land degradation is already present or where the soil is already in poor shape, thus adding undue pressure on the land.
What is the impact of food waste on biodiversity? Agricultural production, in particular food crops, is responsible for 66% of threats to species in terrestrial systems. In the case of marine biodiversity, countries are "fishing down the food chain," with fish catches increasingly consisting of smaller fish that are lower in the food chain, taken at a higher rate than the ability of the fish stocks to renew themselves. Any waste depletes these resources even faster.
What is the economic impact of food waste? On a global scale, about USD 750 billion worth of food was wasted in 2007, the equivalent of the GDP of Turkey or Switzerland. This value is a low estimate, since it mainly considers producer prices and not the value of the end product.
Source: Global Food Wastage - Causes and impact on natural resources, GreenFacts.org.
Endocardial Cushion Defect (also called atrioventricular (AV) canal or septal defects) Endocardial cushion defects are congenital heart conditions that occur early in fetal life due to improperly developed heart tissue in the center of the heart (the endocardial cushion area of the heart). This results in a range of defects that are included in this category of endocardial cushion defects. These include conditions such as atrioventricular canal (called AV canal), atrial septal defects (ASDs), ventricular septal defects (VSDs), and conditions involving the valves within the heart (the AV valves). There are 2 general categories of endocardial cushion defects: the complete form of endocardial defect (involving atria, ventricles and valves), and the partial form of endocardial defect (typically involving the atria only). These conditions have unique characteristics, and each child's heart structure is quite unique within this category of disorders. In addition to holes between the chambers of the heart, the valves may be improperly placed or the leaflets of the valves may not be completely formed for good closing function. The symptoms will vary greatly, depending on the size and location of the defects. Partial defects may not be discovered until later in life because they cause few symptoms. The complete form of the endocardial defects can severely affect a baby's health. Because the left side of the heart is the stronger pump, blood that has received oxygen typically passes through the holes between the chambers (called left-to-right shunt), overloading the right side of the heart. The problems with valve closure can also overload the heart. Individuals with congenital heart defects will typically require antibiotics when they have dental work because the bacteria in the mouth can circulate through the blood and cause infection in the heart structures (endocarditis). They will need to be followed long-term by cardiologists to be sure that any complications or new conditions are quickly detected and treated. For more information about Atrioventricular septal defect (endocardial cushion defect), including resources for parents and general information about congenital cardiac conditions, visit the following websites:
Ensuring access to safe, sufficient, nutritious, and sustainably grown food under a changing climate is a challenge for decisionmakers in Africa. Though adapting to climate change is only one of the issues that will be addressed by Rio+20 negotiators in June, it is a crucial issue facing Africa. It is anticipated that extreme weather events will disrupt the availability of and access to food by the continent's most vulnerable population groups. IFPRI researchers contribute evidence-based solutions to help policymakers design effective strategies for meeting the food security needs of these groups.
How can Africa increase its agricultural productivity in the context of a changing climate? Climate change is expected to result in a decline in many crop yields, according to several climate scenarios run by IFPRI researchers. For example, a farm-level analysis in Tunisia predicts that under climate change, land productivity would fall by 15 to 20 percent in the short term and 35 to 55 percent in the long run. A large-scale analysis for Sub-Saharan Africa finds that cereal production growth will decline by 5 percent and that foods such as wheat, sweet potato, and cassava are vulnerable to climate change. To adapt to these changes, IFPRI research shows that farmers will likely expand their cultivation area and switch from more vulnerable crops such as hard wheat, fava beans, and chickpeas to hardier crops such as soft wheat and barley. Increasing irrigation and fertilizer use and sowing at different dates may also mitigate their loss of productivity in the short run. According to IFPRI analysts, short-run challenges are manageable with properly targeted investments; the long-run impact, however, is more difficult to address and will require mitigation efforts on a global scale.
How can communities become more resilient to climate-induced global price shocks? Over the next few decades, climate change is expected to reduce the global food supply, while growing incomes and population will increase demand. In addition, major food products will become more expensive: prices for rice, wheat, and maize are projected to increase by 48, 36, and 34 percent, respectively, by 2050. As a net importer of cereals, Africa is expected to be severely hit by these price increases. Trade restriction policies, such as those implemented by some exporting countries during the 2007-08 and 2010-11 food crises, would exacerbate this effect. Implementing a global regulation platform would discourage such trade restriction practices.
How can farmers manage their land and water use, already under pressure from increasing global demand? The world's cultivated area is predicted to fall by 0.7 percent by 2050, according to a moderate climate change scenario run by IFPRI. In Sub-Saharan Africa, both rainfed and irrigated areas will decrease, putting pressure on Africa's land and water use. Although Africa has the potential to expand its cultivated land, IFPRI researchers note that increasing productivity would have a higher payoff. Investment programs that focus on productivity increases are needed to counteract the adverse impacts of climate change. In order for farmers in Africa to be more meaningful contributors to a green economy, they must overcome the hurdle of adapting to climate change.
When we think of Alexander the Great, we think of an outstanding war hero. When we think of Napoleon Bonaparte, we think, again, of an outstanding war hero. If a random person were asked who either of these rulers was, their first response would be a fact about war. Alexander and Napoleon share similarities in their warfare and in how they used it to conquer and establish new lands. Alexander the Great's strong perseverance and incredible battle strategies increased his power over his empire. Napoleon used his intelligence and skill at manipulation to earn respect and support from the French people, which gained him great power. Both men had similar leadership qualities, but their strategies for attaining power were very different. Alexander the Great was King of Macedon, a state located in northern Greece. Aristotle tutored him until the age of 16, and by the age of 30 he had created one of the largest empires in the ancient world. As he was undefeated in battle, Alexander is considered one of history's most successful military commanders, and his battles and strategies are still taught at military schools worldwide. Alexander III of Macedon, commonly known as Alexander the Great, was born on a bright July day in 356 B.C. and died in June of 323 B.C. During his lifetime he was: King of Macedonia (336-323), Pharaoh of Egypt (332-323), King of Persia (330-323), and King of Asia (331-323). From those titles alone, it is clear that he was a conqueror and a successful ruler. Alexander was the son of his predecessor Philip II, who died in 336 B.C., leaving the throne, a strong kingdom, and a very experienced army to Alexander. Alexander was appointed general of Greece and went on to complete his father's plans for military expansion. With this set-up, King Alexander wasted no time. In 334 B.C. he invaded Persian-ruled Asia Minor and began a campaign lasting roughly ten years. During this campaign, specifically the battles of...
Micronutrient malnutrition, mainly of iron, Vitamin A, and zinc, affects more than 2 billion people worldwide. One notable cause of this condition is abnormal digestive function due to parasitic intestinal infections. Since infections with soil-transmitted helminths (STH) are observed in 24% of the world's population, they contribute to approximately 100 million cases of malnutrition in children globally. The parasites responsible are Ascaris lumbricoides, Trichuris trichiura, and the hookworms Ancylostoma duodenale and Necator americanus. This narrative review aims to synthesize the existing literature on the epidemiology, pathogenesis, and clinical manifestations of micronutrient malnutrition (iron, Vitamin A, and zinc) to explain its relationship with STH infections. Research journals and articles were retrieved from PubMed and Google Scholar. Search terms included "helminthiasis or STH infection or soil-transmitted helminth infection," "malnutrition or nutrition deficiency," "iron or ferritin or hemoglobin," "Vitamin A or retinol," and "zinc." STH infections can cause micronutrient malnutrition. The relationship between STH and micronutrient malnutrition is a significant burden, mainly in children and pregnant women who reside in rural communities of developing countries. Iron deficiency is the most common micronutrient malnutrition manifested in infected populations, mainly in pregnant women. In contrast, Vitamin A deficiency occurs more often in children than in pregnant women. The least common of all micronutrient deficiencies occurring in STH-infected individuals is zinc deficiency. However, since only a few studies have conducted additional assessments for other possible contributing factors (e.g., dietary intake, underlying genetic conditions), further research is needed to elucidate the complex interplay of other determinants and risk factors involved in this health scenario.
Out across the plains of South Dakota, over 1,500 bison (Bison bison bison) were rounded up recently as part of efforts to protect the species and maintain the health of the herd. Every year, Custer State Park holds this annual health check to make sure the bison are thriving and to help vaccinate the year's new calves. Moving these animals (males can stand as tall as 1.82 meters/6 feet and weigh approximately 900 kilograms/2,000 pounds) is no small matter, and keeping the species safe is vitally important. Bison used to be plentiful across the United States before hunters, soldiers, and tourists brought the numbers close to extinction. There were at least 10 million bison in the southern herd of the North American plains in 1870 – but in less than 20 years, this had plummeted to only 500 wild specimens. This great slaughter had terrible knock-on effects not just for the ecosystem but for the Native Americans who relied on this species. "Now, after more than a century of conservation efforts, there are more than 500,000 bison in the United States," South Dakota Gov. Kristi Noem, a horseback rider who took part in the roundup, told the Associated Press. "The Custer State Park bison herd has contributed greatly to those efforts." The herd at Custer State Park started with just 36 animals in 1914, but numbers have risen over the years and the park now has around 1,500 animals. Each year, the roundup allows officials to check on the health of the animals and decide which individuals will be sold to other parks. Around 400 calves are born in the park each year. "Each year we sell some of these bison to intersperse their genetics with those of other herds to improve the health of the species' population across the nation," Noem said. In southern Wyoming, a rare white calf was born to the bison herd this year, while across the pond in the United Kingdom, European bison (Bison bonasus) are slowly being reintroduced to a small area of the Kent countryside in the hope of all the rewilding benefits these creatures can bring.
English Grammar Rules
In general the plural of a noun is formed by adding -S to the noun.
1. When the noun ends in S, SH, CH, X or Z*, we add -ES to the noun.
- I have a box in my bedroom.
- I have three boxes in my bedroom.
* With words that end in Z we sometimes add an extra Z to the plural form of the word (such as with the plural of quiz: quizzes).
2. When the noun ends in a VOWEL + Y, we add -S to the noun (e.g. boy - boys).
3. When the noun ends in a CONSONANT + Y, we remove the Y and add -IES to the noun (e.g. baby - babies).
4. If the noun ends in F or FE, we remove the F/FE and add -VES to the noun (e.g. wolf - wolves).
Some exceptions: roof - roofs, cliff - cliffs, chief - chiefs, belief - beliefs, chef - chefs
5. If the noun ends in a CONSONANT + O, we normally add -ES to the noun (e.g. potato - potatoes).
Some exceptions: piano - pianos, halo - halos, photo - photos
NOTE: Volcano has two correct plural forms. Both volcanos and volcanoes are accepted.
6. There are a number of nouns that don't follow these rules. They are irregular and you need to learn them individually because they don't normally have an S on the end.
- There is a child in the park.
- There are many children in the park.
7. There are some nouns in English that are the same in the singular and the plural.
- I can see a sheep in the field.
- I can see ten sheep in the field.
Sometimes you will hear the word fishes (especially in songs) though it is grammatically incorrect.
A rough code sketch of the basic rules appears after the advanced rules below.
The next rules are a lot more advanced and even native speakers have difficulty with these. Unless you are an advanced student, I wouldn't recommend learning them just now.
8. If the noun ends in IS, we change the IS to ES (e.g. crisis - crises). Words that end in IS usually have a Greek root.
9. If the noun ends in US, we change the US to I (e.g. cactus - cacti). Words that end in US usually have a Latin root.
Some exceptions: octopus - octopuses (because it is from Greek, not Latin), walrus - walruses
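Because rules 1 to 7 are mechanical, they are easy to express in code. The sketch below is an illustration added for this edition, not part of the original lesson; the exception lists are deliberately incomplete and would need extending for real use.

```python
# A rough sketch of basic English pluralization (rules 1-7 above).
IRREGULAR = {"child": "children", "person": "people", "mouse": "mice", "quiz": "quizzes"}
UNCHANGED = {"sheep", "fish", "deer", "species"}           # rule 7
F_EXCEPTIONS = {"roof", "cliff", "chief", "belief", "chef"}
O_EXCEPTIONS = {"piano", "halo", "photo", "volcano"}       # volcano: both forms accepted
VOWELS = "aeiou"

def pluralize(noun: str) -> str:
    if noun in IRREGULAR:                                  # rule 6
        return IRREGULAR[noun]
    if noun in UNCHANGED:
        return noun
    if noun.endswith(("s", "sh", "ch", "x", "z")):         # rule 1
        return noun + "es"
    if noun.endswith("y"):                                 # rules 2 and 3
        return noun + "s" if noun[-2] in VOWELS else noun[:-1] + "ies"
    if noun.endswith("fe") and noun not in F_EXCEPTIONS:   # rule 4
        return noun[:-2] + "ves"
    if noun.endswith("f") and noun not in F_EXCEPTIONS:
        return noun[:-1] + "ves"
    if noun.endswith("o") and noun[-2] not in VOWELS and noun not in O_EXCEPTIONS:
        return noun + "es"                                 # rule 5
    return noun + "s"                                      # default rule

print(pluralize("box"), pluralize("baby"), pluralize("wolf"), pluralize("piano"))
# boxes babies wolves pianos
```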
What is an abbreviation? An abbreviation is a shortened form of a word or phrase. Some examples of abbreviations include:
Doctor - Dr.
Mister - Mr.
Missus - Mrs.
Etcetera - Etc.
Continued - Cont.
Why are abbreviations used? Abbreviations are used to shorten a word or phrase to make writing easier. Let us take the following example: Professor Hellen and Mister Wilson were discussing Mister Henson's overall performance in tidiness, punctuality, etcetera. This could easily be shortened by making the following abbreviations: Professor = Prof., Mister = Mr., Etcetera = Etc. Let us take a look at the revised sentence: Prof. Hellen and Mr. Wilson were discussing Mr. Henson's overall performance in tidiness, punctuality, etc. It makes the writing a lot easier. In short, abbreviations are shortened forms of words or phrases that make writing easier.
Geologists Chris von der Borch and Dave Mrofka collect sediment samples in South Australia. These rocks hold clues to help explain why climate changed 635 million years ago. (Image courtesy of Martin Kennedy, UCR.)
Scientists Search for the Cause of Ancient Global Warming
News story originally written on May 28, 2008
Earth's climate is warming quickly now. We know that this has to do with additional greenhouse gases in the atmosphere and other global changes. But there is a lot we don't yet know about how warming will affect our planet. How could we know? We've never been through this before, have we? Actually, even though we humans have never experienced fast global warming, our planet has. And our planet keeps records of what happened. The oldest records that the Earth keeps are in its rocks. Looking through those records of our planet, geologist Martin Kennedy searched for evidence of ancient climate changes in very old sedimentary rocks. He was interested in learning more about how and why rapid global warming happened 635 million years ago. Kennedy and two other scientists, David Mrofka and Chris von der Borch, collected hundreds of sediment samples from rocks in South Australia. Each sample of sediment was studied with stable isotope analysis, an important tool used to understand climates of the past. "Our findings document an abrupt and catastrophic global warming that led from a very cold, seemingly stable climate state to a very warm, also stable, climate state--with no pause in between," said Kennedy. Earth had been covered by a thick ice sheet for millions of years before the warming started 635 million years ago. Their research suggests that a little warming caused the ice sheets to collapse. This released a large amount of the greenhouse gas methane into the atmosphere, which had been trapped in a frozen icy form under the ice sheets. The methane increased global warming rapidly. Today, methane is in Arctic permafrost and beneath the oceans. Researchers believe that these sinks of methane will remain where they are unless triggered by global warming. It's possible that very little warming could unleash this trapped methane, which could warm the Earth by tens of degrees.
Watch the video or read the article below:
TI 83 Central Limit Theorem: Overview
The Central Limit Theorem (CLT) describes the "sampling distribution of the means": the distribution formed by the means of an infinite number of random samples of size N drawn from a "parent population." It tells us that the distribution of sample means will be approximately normal as N gets larger. In addition, the mean of the sampling distribution of the means equals the mean of the parent population, and its standard deviation equals the parent population's standard deviation divided by √N. The TI 83 calculator has a built-in function that can help you calculate probabilities for central limit theorem word problems, which usually contain the phrase "assume the distribution is normal" (or a variation of that phrase). The function, normalcdf, requires you to enter a lower bound, upper bound, mean, and standard deviation.
TI 83 Central Limit Theorem: Steps
Sample problem: A fertilizer company manufactures organic fertilizer in 10 pound bags with a standard deviation of 1.25 pounds per bag. What is the probability that a random sample of 15 bags will have a mean between 9 and 9.5 pounds?
Step 1: Press 2nd VARS, 2.
Step 2: Enter your variables (lower bound, upper bound, mean, and standard deviation): 9, 9.5, 10, 1.25/√(15)). To type the square root symbol, press 2nd then x².
Step 3: Press ENTER. This returns a probability of 0.05969, or about 5.97%.
Tip: If you have a question that asks for "greater than" or "less than" a certain number, enter 999999999 for the upper or lower bound. For example, for the probability that the mean is greater than 8 pounds you would enter normalcdf(8, 999999999, 10, 1.25/√(15)); for less than 8 pounds you would enter normalcdf(-999999999, 8, 10, 1.25/√(15)).
Tip: Sampling distributions require that the standard deviation of the mean is σ/√(n), so make sure you enter that as the standard deviation.
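If you want to check the calculator's answer off-device, the same computation takes a few lines of Python. This is an added illustration, not part of the original tutorial; it assumes SciPy is installed.

```python
from math import sqrt
from scipy.stats import norm

mean, sigma, n = 10, 1.25, 15      # bag mean, bag SD, sample size
se = sigma / sqrt(n)               # standard error of the sample mean
p = norm.cdf(9.5, mean, se) - norm.cdf(9, mean, se)
print(round(p, 5))                 # 0.05969, matching normalcdf(9, 9.5, 10, 1.25/sqrt(15))
```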
Description: The Fourth Edition of this well-received text on the principles of geographic information systems (GIS) continues the author's style of "straight talk" in its presentation and is written to be accessible and easy to follow. Unlike most other GIS texts, this book covers GIS design and modeling, reflecting the belief that modeling and analysis are at the heart of GIS. This approach helps students understand and use GIS technology.
Chapter 1. Introduction to Digital Geography.
Unit 2. Digital Geographic Data and Maps. Chapter 2. Basic Geographic Concepts. Chapter 3. Map Basics. Chapter 4. GIS: Computer Structure Basics. Chapter 5. GIS Data Models.
Unit 3. Input, Storage, and Editing. Chapter 6. GIS Input. Chapter 7. Data Storage and Editing.
Unit 4. Spatial Analysis. Chapter 8. Query and Description. Chapter 9. Measurement. Chapter 10. Classification. Chapter 11. Statistical Surfaces. Chapter 12. Terrain Analysis. Chapter 13. Spatial Arrangement. Chapter 14. Map Overlay. Chapter 15. Cartographic Modeling.
Unit 5. GIS Output. Chapter 16. Cartography and Visualization.
Unit 6. GIS Design Issues. Chapter 17. GIS Design.
- New introductory chapter, "Spatial Learner's Permit," makes this text even more accessible to students without GIS backgrounds.
- More in-depth examination of the underlying computer science behind GIS.
- Expanded coverage of the increasingly robust literature on cartographic visualization.
- Explicit learning objectives at the beginning of each chapter detail precisely what students will be expected to know, allowing professors to create lesson plans and strategies to match.
- End-of-chapter questions test students' knowledge of key concepts.
Physics with Calculus/Mechanics/Energy
Kinetic energy is the energy of a mass in motion. In the non-relativistic approximation, kinetic energy is equal to KE = (1/2)mv², where m is the mass of the object and v is its velocity.
Potential energy in a constant gravitational field is given by PE = mgh, where m is the mass of the object, g is the strength of the gravitational field (approximately 9.8 m/s² on Earth) and h is the height of the object.
Work-kinetic energy relation
Potential energy, kinetic energy relationship: a mass held at a given height has a certain amount of potential energy. When the object is set in motion, that potential energy is transferred to kinetic energy. By the law of conservation of energy (the first law of thermodynamics), energy cannot be created or destroyed, only transferred, which is why such a transfer of energy can occur.
For a given force, the work done can be found in one of two ways: the calculus method (which involves integrating the force function) and the algebraic method (which uses the work-kinetic energy relationship).
Calculus method (example: the compression of a spring from 1 m to 4 m). Integration finds the area under the force curve (which can also be seen on a graph). If F = kx, where F is the force, k is the spring constant, and x is the distance the spring is compressed, then since k is a constant it can be pulled outside the integral, and the work is W = ∫F dx = ∫kx dx = kx²/2, evaluated between the limits of compression.
To figure out the work due to a changing velocity, first determine how much energy is in the given system. Your equation from then on will be mgh + (1/2)mv² = total energy at full height.
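To make the spring example concrete, the integral can be carried out explicitly. The following worked equation is an added illustration (the original leaves the limits unevaluated); k is the unspecified spring constant:

```latex
W = \int_{1}^{4} kx \, dx
  = k \left[ \frac{x^{2}}{2} \right]_{1}^{4}
  = k \cdot \frac{4^{2} - 1^{2}}{2}
  = 7.5\,k
```

With k in newtons per meter, W comes out in joules.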
5.2. The evolution of galaxies in clusters
The importance of collisions for the evolution of cluster galaxies was understood quite early. Since a cluster of galaxies is a dense environment, ``collisions must necessarily enter as a factor in the evolution of the system'' (Shapley, 1935). In 1937 Zwicky imagined that collisions might lead to the disruption of certain types of nebulæ, which could explain why the morphological mix of cluster galaxies is different from the field. The first observational evidence for this effect came only thirty years later, when Reaves found that dwarf galaxies avoid the cluster centres. In 1943 Chandrasekhar developed his theory of ``dynamical friction'', ``the systematic decelerating effect of the fluctuating field of force acting on a star in motion''. Chandrasekhar derived his formula on the basis of the two-body approximation for stellar collisions. More than thirty years later, with the discovery of massive halos around galaxies, Lecar suggested that galaxies gradually settle to the cluster centres by dynamical friction through a sea of tidally-stripped galaxy halos. The validity of Chandrasekhar's formula was confirmed through numerical simulations by White [492, 493]. In 1940 Holmberg remarked that spirals must transform into ellipticals if clusters form by the capture of field galaxies. Spitzer & Baade, in 1951, were the first to suggest collisions as a mechanism to transform one galaxy type into another. They thought that collisions would affect primarily the gas content of a galaxy, and not so much its stellar structure, leading to the formation of irregular galaxies. A year later Zwicky found evidence for intergalactic matter in small galaxy groups, and attributed it to material stripped from galaxies during close encounters. This was confirmed 20 years later by the simulations of Toomre & Toomre. Spitzer & Baade's analysis was revised twice between 1963 and 1965. First Aarseth revised downward Spitzer & Baade's estimate of the number of galaxy-galaxy collisions, as a consequence of the revised distance scale. Then Alladin revised upward Spitzer & Baade's estimate of the internal energy change of a galaxy during a collision. In 1970 Tinsley developed her theory for the evolution of the spectral energy distribution of galaxies and showed that strong evolutionary corrections were to be expected for the colours of ellipticals, because of the aging of the stellar population. The following year, Oke devised a method to compare the colours of nearby and distant cluster ellipticals with evolutionary models, and thus infer their (photometric) redshifts.
Figure 32. Contours of X-ray emission around the galaxy M 86 in Virgo. The extended emission was interpreted as evidence for ram pressure stripping of hot gas from the galaxy. From Forman et al. (1979).
In 1972 Rood et al. noticed that the Coma cluster S0s were not confined to the cluster core, where collisions were expected to be most effective, and questioned the validity of the collision model for the formation of lenticular galaxies. In the same year, Gunn & Gott and Larson presented two alternative models for the evolution of galaxy morphologies. Gunn & Gott proposed ram pressure stripping of the interstellar gas by the hot IC medium as a means of transforming spirals into S0s. The first direct observational evidence of such an effect came seven years later, with Forman et al.'s X-ray observations of the Virgo galaxy M 86 - see Fig. 32.
Larson, on the other hand, suggested a relation between the morphological type of a galaxy and the collapse time of the gas during galaxy formation. Galaxies with a short collapse time would have their material used up early, leading to old stellar populations and little gas left (as in ellipticals and S0s). The morphology-density relation could then follow by relating the collapse time to the ambient density. According to Oemler, the ``birthrate of elliptical galaxies [...] increases with density relative to the other galaxy types'', and collisions may be sufficient to transform spirals into S0s but not into ellipticals. Larson's ideas were later developed by Gott & Thuan. In 1975 Biermann & Tinsley remarked upon the similarity of the colours of ellipticals and S0s. This implies that ellipticals and S0s have similar stellar populations, and therefore similarly old ages, so that a recent transformation of spirals into S0s is out of the question. The issue is certainly not closed, with independent evidence both for and against an ancient origin of S0s. In 1976 White's n-body simulations showed that the formation process of a cluster leads to an increasing ellipticity of galaxy orbits with clustercentric radius, i.e. radial motions are predominant in the outer cluster regions. The observations of Moss & Dickens seemed to confirm White's findings. Moss & Dickens observed that late-type galaxies have a higher velocity dispersion than early types, and interpreted this as evidence for an infalling population of field galaxies into the clusters. Recently Biviano et al. have shown that emission-line galaxies in clusters are characterized by predominantly radial orbits. A thorough determination of the orbits of different types of cluster galaxies, through the solution of the Jeans equation, is in preparation. White's simulations [491, 492] also showed that marginal mass segregation can develop in clusters through dynamical friction. Merging of the slowed-down galaxies would then follow in the cluster core, eventually with the formation of a cD galaxy (see Section 5.4). Struble's observation of a low velocity dispersion in the core of some galaxy clusters was taken as supporting evidence for these effects. A few years later Roos & Aarseth re-examined the issue of mass segregation by running n-body simulations of a galaxy system with a Schechter-like distribution of galaxy masses. They noted that segregation establishes itself in subclusters before these merge to form the final cluster. Segregation is then conserved while the cluster evolves, because tidal stripping predominantly affects the outer regions of subclusters. Such an evolutionary scenario was found to be consistent with Capelato et al.'s observations of luminosity segregation in Coma, and with recent analyses of the Coma cluster structure [300, 68]. In 1980 Dressler noted that ram-pressure stripping could not account for the different bulge-to-disk ratios of spirals and S0s. Richstone and Marchant & Shapiro had already shown that collisions of spirals can fatten the galaxy disks, so that Dressler's observation was not a problem in the collision scenario. Farouki & Shapiro's simulations showed however that the ram-pressure mechanism would also lead to a thickening of the galaxy disks. Finally, in 1982 Nulsen noted that other interaction mechanisms between cluster galaxies and the hot IC gas medium (viscosity, thermal conduction, turbulence) could be even more effective than ram-pressure in stripping gas from galaxies.
In 1980, Larson et al. noted that if star formation continued in galaxy disks at the rate determined in the local Universe, spirals would run out of gas in a relatively short time. Replenishment of the disk gas is therefore needed. An early generation of spirals, formed in high-density regions, would be characterized by small disks, and such spirals could evolve into today's S0s by the loss of their gaseous halos through collisions. According to Roos & Norman's n-body simulations, ellipticals could instead form via mergers during the early stage of cluster collapse, before the dispersion of galaxy velocities becomes too high.
Figure 33. The V-R colour distribution of galaxies in the cluster Cl0024+1654 (left), and in the cluster 3C295 (middle). Different shadings correspond to subsamples of galaxies at different distances from the cluster centres. The B-V distribution of galaxies in the Coma cluster (right). Solid area: ellipticals; hatched area: S0s; remainder: spirals. From Butcher & Oemler (1978a).
All these theoretical efforts to determine the evolution of galaxies received a formidable acceleration with the first direct observational evidence for the evolution of the cluster galaxy population. In 1978, Butcher & Oemler published the first of a series of papers on The evolution of galaxies in clusters. Their photometric observations of two regular, centrally concentrated clusters at z ~ 0.4 showed an excess of blue galaxies, as compared to nearby rich clusters - see Fig. 33. Butcher & Oemler [85, 86] noted that such a high fraction of blue galaxies was more typical of nearby poor irregular clusters like Hercules. They later confirmed their finding through photometric observations of seven more clusters at redshifts beyond 0.2 (Butcher et al.). Butcher & Oemler's result was greeted with much scepticism. Even before Butcher & Oemler's paper was published, Baum (in the discussion following a talk by Spinrad) suggested that their result could be due to contamination by field galaxies. Koo imaged another distant cluster, where he did not find evidence for the Butcher-Oemler effect. Mathieu & Spinrad re-examined the fraction of blue galaxies in one of the Butcher-Oemler clusters, and showed it to be much lower than originally estimated. Lucey's critical ``assessment of the completeness and correctness of the Abell catalogue'' led him to conclude that the Butcher-Oemler effect was due to an erroneous assignment of cluster membership. Theorists were however not discouraged by potential observational biases. In the models of Norman & Silk and Himmes & Biermann, the IC gas gradually builds up from the gas stripped through collisions of cluster galaxies. Norman & Silk noted that such a gradual build-up of the IC gas can delay the effectiveness of ram-pressure stripping until z ~ 0.5. If ram-pressure stripping transforms spirals into S0s, this would explain the excess of spirals in high-redshift clusters. However, Henry et al.'s X-ray observations showed the existence of a dense IC medium in one of the clusters showing the Butcher-Oemler effect. In 1982 and 1983, Dressler & Gunn [144, 145] finally performed spectroscopic observations of galaxies in Butcher-Oemler clusters. The fraction of blue galaxies which are cluster members was found to be lower than predicted by Butcher & Oemler, but still higher than in nearby rich clusters. The Butcher-Oemler effect was confirmed.
More than twenty years after the original discovery, the Butcher-Oemler effect is well established (see ELLINGSON, MARGONINER, these proceedings), and considerable progress has been made in determining the nature of the excess blue galaxies (see, e.g., Poggianti et al.). The physical mechanisms responsible for the evolution of cluster galaxies are not yet determined with certainty, but it is likely that collisions, as initially suggested by Shapley, are of fundamental importance (see MOORE, KAUFFMAN, LANZONI, these proceedings).
For some years now, research has been conducted on the state educational standards used across the United States. Since state standards may vary considerably, students complete their public education based on academic standards that differ from state to state. This is a disservice to students entering college or the workplace in this rapidly changing and increasingly global economy. The culmination of this research is the new Common Core Standards for English language arts and math, K-12. In a nutshell, this state-led initiative is the result of the joint efforts of an extremely diverse group of educators, experts, parents, school administrators, and community leaders across the country, coordinated through their membership in the Council of Chief State School Officers and the National Governors Association Center for Best Practices. The new standards have been benchmarked to international standards "to guarantee that our students are competitive in the emerging global marketplace." It's interesting to note that the federal government was not involved in the development of the standards, nor will it be involved in the implementation of the new standards; this has been a state-led initiative from the beginning. A wealth of information can be found at the Common Core State Standards Initiative's website, located at www.corestandards.org. At this point almost all of the states, along with the District of Columbia, have adopted the new Common Core Standards, some through their state boards of education, some through their state legislatures. The Guiding Principles of the New Mexico Common Core are outlined at the NM Public Education Department website (newmexicocommoncore.org) and include:
- preparing students with the knowledge and skills they need to succeed in education and training after high school;
- ensuring students are globally competitive;
- improving equity and economic opportunity for all students by having consistent expectations for achievement for all students;
- clarifying expectations, so that parents, teachers and students understand what is expected of them;
- and collaborating across districts and with other states in the sharing of resources, expertise in materials development, teacher professional development and student exams based upon best practices.
The Common Core Standards are challenging, including rigorous content and application of knowledge through higher-order skills, with the intent of constituting "a different approach to learning, teaching, and testing that engenders a deeper understanding of critical concepts and the practical application of that knowledge." Information about New Mexico's transition to the Common Core is available through the NM PED website, along with a timeline for implementing these standards over the course of the next three years. Teachers in Clovis are receiving professional development surrounding the Common Core State Standards. Kindergarten through third grade students will begin the transition to the standards in the fall of 2012. Students in grades 4-12 will begin the transition in the fall of 2013. Common Core information for Clovis Municipal Schools can be found at www.clovis-schools.org/instruction/curriculum.html, or feel free to call the Instruction Department at Central Office, 769-4300.
Cindy Kleyn-Kennedy is the instructional technology coordinator for the Clovis Municipal Schools and can be reached at: email@example.com
Climate change and children's rights The Human Rights Council adopted a resolution during its 32nd session on human rights and climate change with a focus on children's rights. It recognises that children are among the most vulnerable to climate change, and that this may have a serious impact on their rights. It may impact on children's rights to enjoy the highest attainable standard of physical and mental health, access to education, adequate food, adequate housing, safe drinking water and sanitation. According to the text, the Human Rights Council will hold a panel discussion at its 34th session and the Office of the High Commissioner for Human Rights will conduct a "detailed analytical study on the relationship between climate change and the full and effective enjoyment of the rights of the child". The resolution also refers to the day of general discussion of the Committee on the Rights of the Child on children's rights and the environment to be held on 23 September 2016. - The Human Rights Council adopted resolution on human rights and climate change. - The Day of General Discussion on children's rights and the environment. - Together's written response to the Day of General Discussion on children's rights and the environment. In addition, CRIN is calling on the UN Committee on the Rights of the Child to address children's access to justice in the context of the environment as part of its Day of General Discussion. Climate change, pollution, environmental degradation and resource depletion have a disproportionate effect on the quality of life of current and future generations of children. Furthermore, children's bodies are particularly susceptible to adverse effects of environmental harm because exposure occurs during sensitive periods of development and their young age means they will have to live with any consequences for longer. Ensuring children's access to justice in this context can secure redress for violations already incurred and prevent their recurrence. States should in particular establish collective and public interest action mechanisms; ensure NGOs have standing to file and intervene in legal proceedings in the interests of children affected now and on behalf of future generations; and enshrine the justiciable right to a clean environment in domestic law. Sign up to our e-Newsletter Get the very latest on children’s rights by following us on Twitter.
(Phys.org) —Now approaching its 10th anniversary, NASA's Spitzer Space Telescope has evolved into a premier observatory for an endeavor not envisioned in its original design: the study of worlds around other stars, called exoplanets. While the engineers and scientists who built Spitzer did not have this goal in mind, their visionary work made this unexpected capability possible. Thanks to the extraordinary stability of its design and a series of subsequent engineering reworks, the space telescope now has observational powers far beyond its original limits and expectations. "When Spitzer launched back in 2003, the idea that we would use it to study exoplanets was so crazy that no one considered it," said Sean Carey of NASA's Spitzer Science Center at the California Institute of Technology in Pasadena. "But now the exoplanet science work has become a cornerstone of what we do with the telescope." Spitzer views the universe in the infrared light that is a bit less energetic than the light our eyes can see. Infrared light can easily pass through stray cosmic gas and dust, allowing researchers to peer into dusty stellar nurseries, the centers of galaxies, and newly forming planetary systems. This infrared vision of Spitzer's also translates into exoplanet snooping. When an exoplanet crosses or "transits" in front of its star, it blocks out a tiny fraction of the starlight. These mini-eclipses as glimpsed by Spitzer reveal the size of an alien world. Exoplanets emit infrared light as well, which Spitzer can capture to learn about their atmospheric compositions. As an exoplanet orbits its sun, showing different regions of its surface to Spitzer's cameras, changes in overall infrared brightness can speak to the planet's climate. A decrease in brightness as the exoplanet then goes behind its star can also provide a measurement of the world's temperature. While the study of the formation of stars and the dusty environments from which planets form had always been a cornerstone of Spitzer's science program, its exoplanet work only became possible by reaching an unprecedented level of sensitivity, beyond its original design specifications. Researchers had actually finalized the telescope's design in 1996 before any transiting exoplanets had even been discovered. The high degree of precision in measuring brightness changes needed for observing transiting exoplanets was not considered feasible in infrared because no previous infrared instrument had offered anything close to what was needed. Nevertheless, Spitzer was built to have excellent control over unwanted temperature variations and a better star-targeting pointing system than thought necessary to perform its duties. Both of these foresighted design elements have since paid dividends in obtaining the extreme precision required for studying transiting exoplanets. The fact that Spitzer can still do any science work at all still can be credited to some early-in-the-game, innovative thinking. Spitzer was initially loaded with enough coolant to keep its three temperature-sensitive science instruments running for at least two-and-a-half years. This "cryo" mission ended up lasting more than five-and-a-half-years before exhausting the coolant. But Spitzer's engineers had a built-in backup plan. A passive cooling system has kept one set of infrared cameras humming along at a super-low operational temperature of minus 407 degrees Fahrenheit (minus 244 Celsius, or 29 degrees above absolute zero). 
The infrared cameras have continued operating at full sensitivity, letting Spitzer persevere in a "warm" extended mission, so to speak, though still extremely cold by Earthly standards. To stay so cool, Spitzer is painted black on the side that faces away from the sun, which enables the telescope to radiate away a maximum amount of heat into space. On the sun-facing side, Spitzer has a shiny coating that reflects as much of the heat from the sun and solar panels as possible. It is the first infrared telescope to use this innovative design and has set the standard for subsequent missions. Fully transitioning Spitzer into an exoplanet spy required some clever modifications in-flight as well, long after it flew beyond the reach of human hands into an Earth-trailing orbit. Despite the telescope's excellent stability, a small "wobbling" remained as it pointed at target stars. The cameras also exhibited small brightness fluctuations when a star moved slightly across an individual pixel of the camera. The wobble, coupled with the small variation in the cameras, produced a periodic brightening and dimming of light from a star, making the delicate task of measuring exoplanet transits that much more difficult. To tackle these issues, engineers first began looking into a source for the wobble. They noticed that the telescope's trembling followed an hourly cycle. This cycle, it turned out, coincided with that of a heater, which kicks on periodically to keep a battery aboard Spitzer at a certain temperature. The heater caused a strut between the star trackers and telescope to flex a bit, making the position of the telescope wobble compared to the stars being tracked. Ultimately, in October 2010, the engineers figured out that the heater did not need to be cycled through its full hour and temperature range—30 minutes and about 50 percent of the heat would do. This tweak served to cut the telescope's wobble in half. Spitzer's engineers and scientists were still not satisfied, however. In September 2011, they succeeded in repurposing Spitzer's Pointing Control Reference Sensor "Peak-Up" camera. This camera was used during the original cryo mission to put gathered infrared light precisely into a spectrometer and to perform routine calibrations of the telescope's star-trackers, which help point the observatory. The telescope naturally wobbles back and forth a bit as it stares at a particular target star or object. Given this unavoidable jitter, being able to control where light goes within the infrared camera is critical for obtaining precise measurements. The engineers applied the Peak-Up to the infrared camera observations, thus allowing astronomers to place stars precisely on the center of a camera pixel. Since repurposing the Peak-Up Camera, astronomers have taken this process even further, by carefully "mapping" the quirks of a single pixel within the camera. They have essentially found a "sweet spot" that returns the most stable observations. About 90 percent of Spitzer's exoplanet observations are finely targeted to a sub-pixel level, down to a particular quarter of a pixel. "We can use the Peak-Up camera to position ourselves very precisely on the camera and put light right on the best part of a pixel," said Carey. "So you put the light on the sweet spot and just let Spitzer stare." 
These three accomplishments—the modified heater cycling, the repurposed Peak-Up camera and the in-depth characterization of individual pixels in the camera—have more than doubled Spitzer's pointing stability and targeting precision, giving the telescope exquisite sensitivity when it comes to taking exoplanet measurements. "Because of these engineering modifications, Spitzer has been transformed into an exoplanet-studying telescope," said Carey. "We expect plenty of great exoplanetary science to come from Spitzer in the future." More information: www.nasa.gov/spitzer
Ever since the time of the Industrial Revolution, human activities have caused severe damage to the environment. Fossil fuels used in energy production emit greenhouse gases, which are the primary cause of global warming and climate change. Governments around the world have invested billions of dollars in research into ground-breaking technologies that offer alternative energy sources. Scientists and inventors have been hard at work developing ideas that can help protect our planet for generations to come. Below is a sampling of these innovations. 10. LED Lights LED stands for "light-emitting diode". It is a relatively new technology used in the manufacturing of light bulbs. LED lights produce the most energy-efficient form of lighting, with the bulbs consuming about 10% of the energy used by traditional incandescent bulbs. LED lamps have a lifespan of over 30,000 hours of illumination, an impressive figure compared to the 8,000 hours offered by traditional bulbs. (A quick savings calculation appears after this list.) 9. Landfill gas Landfills are areas where public waste is dumped and, on some occasions, where recycling of the waste is carried out. Due to the various chemical reactions in the waste, landfills produce large amounts of greenhouse gases, mostly methane and carbon dioxide. Experts have been studying ways of utilizing these gases and have come up with ways to tap them for energy production through combustion. The US has invested heavily in this new technology and has 399 projects harnessing landfill gas to produce energy. 8. Wind energy Wind is another important source of green energy and has limitless potential. Wind power is seen as a sustainable alternative to fossil fuels due to its renewability. Wind turbines used to harness the kinetic energy of wind occupy a relatively small area and have minimal environmental effects. More countries around the world have been embracing the use of wind power, which currently accounts for 4% of total global electricity production. In the European Union, 44% of all new generating capacity installed in 2015 was wind power. 7. Ocean thermal energy conversion Ocean thermal energy conversion is a process where energy is produced from the variation in ocean water temperatures. Ocean thermal energy conversion is still in its infancy, but great strides are being made, with millions of dollars invested in research. Scientists see the technology as having great potential, with several countries conducting pilot projects. Ocean thermal energy conversion produces green energy and has crucial by-products as well, including fresh water for irrigation and domestic use. Currently, Japan is the only country in the world with an operational ocean thermal energy conversion plant, located on Kume Island. 6. Better nuclear energy Nuclear energy is one of the most important sources of energy in the world. Developed countries have set up several nuclear plants which produce energy through nuclear fission, with uranium as the key fuel. However, the amount of ore from which uranium is mined has been declining. The nuclear disasters at Chernobyl and Fukushima have made scientists look for alternative elements for the nuclear fission process. Thorium has become the new frontier in nuclear energy: it is abundant in the Earth's crust and produces far less nuclear waste, and thorium-based nuclear energy is seen as the future of nuclear power. 5.
Fuel cell technology Fuel cell technology is the use of fuel cells to produce sustainable energy. A fuel cell uses chemical reactions to produce electricity and consists of two electrodes and an electrolyte. Scientists are looking at fuel cell technology as an alternative to fossil fuels. The technology is already used in some hydrogen-powered vehicles, and experts are looking into ways of scaling it up. 4. More adaptable solar panels Solar energy is one of the most important sustainable energy sources and has limitless potential. However, the type of technology used in developing solar panels limits the amount of energy produced. Solar panel technology has grown in leaps and bounds over the years and has been accepted in many countries all over the world. However, the technology is usually quite expensive to set up when compared to other energy sources. Researchers are therefore working tirelessly to seek ways to reduce this high cost as well as make solar panels more energy-efficient. Some of the technologies being looked into include the use of pyrite in the manufacture of solar panels. 3. Fuel-efficient vehicles In many developed economies, vehicles are one of the biggest contributors to greenhouse gas emissions. In the past, environmentalists have been vocal in their criticism of car manufacturers, calling for the manufacture of eco-friendly vehicles. This criticism has contributed to the recent upsurge in the manufacture of fuel-efficient vehicles. For many car manufacturers in Europe, the United States, and Japan, fuel efficiency is one of the most important factors they look at before releasing a car to the market. To curb greenhouse-gas emissions, these fuel-efficient cars must be accessible to the general public. 2. Better cooling and heating systems The traditional way of regulating temperature in buildings is through the use of air conditioners, many of which consume massive amounts of energy from non-renewable sources. Some air conditioners run on fossil fuels, which emit greenhouse gases into the atmosphere. However, there are many other "green" alternatives which can be used as cooling and heating systems. The most popular of the alternatives is the use of solar energy, where various systems are incorporated into the building's design to convert sunlight into energy. 1. Better building insulation North America is notorious for its harsh winter conditions, where temperatures often drop below zero. Homes built in such harsh environments employ various forms of insulation against the freezing winter temperatures. Of the many insulating systems used, spray foam insulation is one of the most preferred options. However, in recent years, experts have been moving away from spray foam insulation and other traditional insulating methods, looking instead for more eco-friendly insulation systems. One of the easiest eco-friendly methods of insulation is the use of cotton denim (a recycled industrial waste), as well as cellulose insulation, where the cellulose is usually made from recycled paper. By using these new insulation technologies, homeowners help reduce the amount of waste dumped in landfills.
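Returning to the LED figures at the top of the list, the savings are easy to quantify. A minimal sketch, with the wattages and electricity price assumed for illustration (a typical 60 W incandescent versus a 6 W LED equivalent):

```python
# Rough energy comparison over one LED bulb's service life.
# Wattages and electricity price are illustrative assumptions.

INCANDESCENT_W = 60
LED_W = 6                     # ~10% of incandescent, as the article notes
LED_LIFESPAN_H = 30_000
PRICE_PER_KWH = 0.15          # assumed electricity price, USD

led_kwh = LED_W * LED_LIFESPAN_H / 1000
inc_kwh = INCANDESCENT_W * LED_LIFESPAN_H / 1000
savings_kwh = inc_kwh - led_kwh
print(f"LED over {LED_LIFESPAN_H:,} h:        {led_kwh:,.0f} kWh")
print(f"Incandescent, same hours: {inc_kwh:,.0f} kWh")
print(f"Savings: {savings_kwh:,.0f} kWh (~${savings_kwh * PRICE_PER_KWH:,.0f})")
```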
Autoregulation is a biological term used to describe processes through which some biological systems are capable of regulating themselves. Autoregulation is most clearly exemplified by the distribution of blood and oxygen throughout the bodies of many different animals. Changes in external conditions and stimuli cause the systems governing blood flow to focus the flow of blood, and therefore oxygen, where it is needed the most. When necessary, blood vessels can constrict or dilate and heart rate can increase or decrease to moderate blood pressure throughout the body. This is of particular importance in the brain, where blood pressure must remain within a relatively small range to avoid damage. In order to fully understand the importance of autoregulation, one must first understand the concept of homeostasis. Homeostasis, as applied to biological systems, is a natural, stable balance in which the system maintains consistent internal conditions regardless of external ones. Processes such as the consumption of nutrients, formation of energy, and formation and distribution of proteins all contribute to homeostasis. Wild changes in energy consumption, nutrient distribution, or even temperature regulation can cause significant harm to an organism, so regulatory mechanisms are necessary to ensure the necessary balance is maintained. Autoregulation is one such mechanism through which particular biological systems are able to regulate themselves. Autoregulation in the brain, referred to as cerebral autoregulation, is critical because of the brain's importance and fragile nature. The brain requires a steady and constant flow of oxygen to remain functional, and even brief periods of significant variance can be quite harmful. The specific purpose of this regulation is to maintain an unchanging flow of blood to the brain even when blood pressure fluctuates. Resistance, flow, and pressure are all important in determining the rate of blood flow in the brain. When one changes, the others can generally adjust to compensate for the change without the need for external factors, such as hormones or neural signals. The brain is not the only organ that contains autoregulatory mechanisms. The heart and kidneys are also capable of regulation without the need for chemical or neural triggers. The particular mechanisms of autoregulation tend to be quite similar and are generally closely linked to blood pressure, flow, and resistance. These autoregulation systems are highly important, if not absolutely necessary, in sensitive organs that need to maintain a precise, constant flow of blood to avoid damage. The organ itself is capable of regulation based on immediate factors without needing to depend on chemical or electrical intermediates that could be misdirected by other processes in the body.
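The pressure-flow-resistance relationship at the heart of these mechanisms has the same form as Ohm's law: flow equals perfusion pressure divided by vascular resistance. A toy sketch of the feedback, using arbitrary illustrative units rather than physiological reference values:

```python
# Autoregulation as a feedback on vascular resistance:
# flow = perfusion_pressure / resistance. If pressure rises, vessels
# constrict (resistance rises) to hold flow near a set point.
# Values are illustrative units, not clinical reference numbers.

TARGET_FLOW = 50.0   # arbitrary flow units

def autoregulated_resistance(pressure: float) -> float:
    """Resistance the vessel bed would adopt to keep flow at the target."""
    return pressure / TARGET_FLOW

for pressure in (60, 80, 100, 120):
    r = autoregulated_resistance(pressure)
    flow = pressure / r
    print(f"pressure={pressure:3d} -> resistance={r:.2f}, flow={flow:.1f}")
```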
What causes sinusitis? Anything that causes swelling in your sinuses or keeps the cilia from moving mucus can cause sinusitis. This can occur because of changes in temperature or air pressure. Allergies can cause sinusitis. Using decongestant nasal sprays too much, smoking, swimming or diving can also increase your risk of getting sinusitis. Some people have growths called polyps (say: “pawl-ips”) that block their sinus passages and cause sinusitis. When sinusitis is caused by a bacterial or viral infection, you get a sinus infection. Sinus infections sometimes occur after you’ve had a cold. The cold virus attacks the lining of your sinuses, causing them to swell and become narrow. Your body responds to the virus by producing more mucus, but it gets blocked in your swollen sinuses. This built-up mucus is a good place for bacteria to grow. The bacteria can cause a sinus infection. Written by familydoctor.org editorial staff
Old English cetil (Mercian), from Latin catillus "deep pan or dish for cooking," diminutive of catinus "bowl, dish, pot." A general Germanic borrowing (cf. Old Saxon ketel, Old Frisian zetel, Middle Dutch ketel, Old High German kezzil, German Kessel). Spelling with a -k- (c.1300) probably is from influence of Old Norse cognate ketill. The smaller sense of "tea-kettle" is attested by 1769. A steep, bowl-shaped hollow in ground once covered by a glacier. Kettles are believed to form when a block of ice left by a glacier becomes covered by sediments and later melts, leaving a hollow. They are usually tens of meters deep and up to tens of kilometers in diameter and often contain surface water. a large pot for cooking. The same Hebrew word (dud, "boiling") is rendered also "pot" (Ps. 81:6), "caldron" (2 Chr. 35:13), "basket" (Jer. 24:2). It was used for preparing the peace-offerings (1 Sam. 2:13, 14). in geology, depression in a glacial outwash drift made by the melting of a detached mass of glacial ice that became wholly or partly buried. The occurrence of these stranded ice masses is thought to be the result of gradual accumulation of outwash atop the irregular glacier terminus. Kettles may range in size from 5 m (15 feet) to 13 km (8 miles) in diameter and up to 45 m in depth. When filled with water they are called kettle lakes. Most kettles are circular in shape because melting blocks of ice tend to become rounded; distorted or branching depressions may result from extremely irregular ice masses.
The word arthroscopy comes from two Greek words, "arthro" (joint) and "skopein" (to look). The term literally means "to look within the joint." In an arthroscopic examination, an orthopedic surgeon makes a small incision in the patient's skin and then inserts pencil-sized instruments that contain a small lens and lighting system to magnify and illuminate the structures inside the joint. Light is transmitted through fiberoptics to the end of the arthroscope that is inserted into the joint. By attaching the arthroscope to a miniature video camera, the surgeon is able to see the interior of the joint through this very small incision rather than the large incision needed for open surgery. The image is magnified up to 20x. The video camera attached to the arthroscope displays the magnified image of the joint on a video monitor, allowing the surgeon to look, for example, throughout the knee (stifle) at cartilage and ligaments, and under the kneecap (patella). The surgeon can determine the amount or type of injury and then repair or correct the problem, if necessary. Diagnosing joint injuries and disease begins with a thorough medical history, physical examination, and usually X-rays. Additional tests such as an MRI, CT scan or ultrasound examination may be needed as well. Through the arthroscope, a final diagnosis is made, which may be more accurate than one made through "open" surgery (arthrotomy) or from X-ray studies. Disease and injuries can damage bones, cartilage, ligaments, muscles, and tendons. Some of the most frequent conditions found during arthroscopic examinations of the joints in dogs are: - Loose bodies of bone and cartilage: OCD (Osteochondrosis / Osteochondritis Dissecans) of the knee, shoulder, elbow, ankle (hock) - Inflammation: Acute and Chronic Synovitis - inflamed lining (synovium) in knee (stifle), shoulder, elbow, or hip - Bursitis: Inflammation of a sac-like structure that surrounds ligaments - Shoulder: OCD, inflammation or tears of the bicipital tendon, rotator cuff injuries - Knee: Cranial cruciate ligament tears with instability, meniscal (fibrocartilage) tears, chondromalacia (softening, wearing or injury of cartilage) - Elbow: OCD, UAP and FCP associated with elbow dysplasia - Hip: Tearing of the ligaments or joint capsule, cartilage damage Although the inside of nearly all joints can be viewed with an arthroscope, four joints are most frequently examined with this instrument. These include the hip, knee, shoulder and elbow. As advances are made by engineers in electronic technology and new techniques are developed by orthopedic surgeons, other joints may be treated more frequently in the future.
Climate change has a disconcerting tendency to amplify itself through feedback effects. Melting sea ice exposes dark water, allowing the ocean to soak up more heat. Arctic warming speeds the release of carbon dioxide from permafrost. And, as researchers discussed at a meeting last week in Seefeld, Austria, climate extremes — heatwaves, droughts and storms — can hamper plant growth, weakening a major buffer against the rise of CO2 in the atmosphere. "Heatwaves and droughts will very likely become more frequent in a warmer climate, and ecosystems will somehow respond," says Philippe Ciais, a carbon-cycle researcher at the Laboratory of Climate and Environmental Sciences in Gif-sur-Yvette, France. "More storms will add an extra dimension to the problem." The meeting was organized by the CARBO-Extreme project, a €3.3-million (US$4.5-million) collaboration of 27 groups from 12 countries, funded by the European Union. Attendees showed off an array of tools for uncovering how extreme events affect terrestrial carbon cycles, including numerical models, CO2 flux measurements and field experiments. The challenge now, says Ciais, is to predict how the frequency of climate extremes will change, and to model the intricate physiological responses — some of which are poorly understood — of plants and ecosystems. Land plants create a huge carbon 'sink' as they suck CO2 out of the air to build leaves, wood and roots. The sink varies from year to year, but on average it soaks up one-quarter of the annual CO2 emissions from the burning of fossil fuels. And events such as droughts, wildfires and storms are likely to "cause a pronounced decline" in the sink, says Markus Reichstein, a carbon-cycle scientist at the Max Planck Institute of Biogeochemistry in Jena, Germany, who coordinates CARBO-Extreme. Climate anomalies have already had a detectable impact. Satellite observations and data from CO2 measurement towers suggest that extreme events reduce plant productivity by an average of 4% in southern Europe and 1% in northern Europe, says Reichstein. That lowers annual carbon uptake by 150 million tons — equivalent to more than 15% of Europe's annual man-made CO2 emissions. The most extreme events can turn forests and grasslands from carbon sinks to sources. In 2003 alone, a record-breaking heatwave in Europe led to the release of more CO2 than is normally locked up over four years. So far, scientists have detected no increase in extreme weather events. But they expect one. Reindert Haarsma, a climatologist at the Royal Netherlands Meteorological Institute in De Bilt, forecasts a surge in hurricane-strength storms such as 1999's Lothar, which raged northeastward from the Bay of Biscay, slashing forest biomass by 16 million tons. By the end of this century, model studies suggest, storms similar to Lothar and another that caused huge damage in France in 2009 will become 25 times more common in Europe. The probability of major heatwaves in Europe is expected to increase up to tenfold by mid-century. Lack of water makes plants less capable of fending off pathogens and insects. After the 2003 heatwave, caterpillars devastated Mediterranean oak forests near Montpellier in France. Researchers have presumed that this triggered a large carbon release, but such responses are hard to predict. Severe droughts in 2005 and 2010 in the Amazon basin seem to have released much less CO2 than expected, says Ciais. "In some ecosystems, small disturbances can have a large impact," he adds.
"In others, even significant anomalies seem to cause only little harm." CARBO-Extreme teams have conducted field experiments that simulated drought in different climates and vegetation types, from Atlantic pine forests to alpine meadows. Unpublished results show that in grasslands, drought markedly slowed photosynthesis, which stores carbon in leaves, roots and soil. It had a smaller effect on soil respiration, which releases carbon, so the net result was a decline in carbon uptake. The experiments also showed that plants and soils keep a 'memory' of disturbances, says Michael Bahn, an ecologist at the University of Innsbruck in Austria who oversees a grassland experiment. He simulated a series of droughts, and found that later ones had a larger effect on net carbon release. Existing biosphere models do not capture such effects, which Bahn thinks might be due to changes in soil microbes. Such omissions could lead to a large bias in the models. The world's soils hold roughly twice as much carbon as the entire atmosphere, and soil respiration returns almost 100 gigatons of carbon to the air each year. Just a 10% increase in soil-respiration rates, says Bahn, would release more CO2 in a year than humans pump out.
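Bahn's comparison is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses rough literature magnitudes that are assumed for illustration; they are not figures from the meeting report:

```python
# Back-of-the-envelope check of the soil-respiration comparison.
# The flux magnitudes below are rough, commonly cited values (assumptions),
# not numbers taken from the meeting report.

soil_respiration = 98.0   # assumed global soil CO2 efflux, Gt C per year
fossil_emissions = 9.7    # assumed fossil-fuel emissions, Gt C per year

extra = 0.10 * soil_respiration
print(f"A 10% rise in soil respiration adds ~{extra:.1f} Gt C/yr,")
print(f"slightly more than the ~{fossil_emissions} Gt C/yr from fossil fuels.")
```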
Lawrence M. Greenberg, MD Professor Emeritus of Psychiatry University of Minnesota Author of the T.O.V.A. Please note that the following information contains the expressed opinions and conclusions of the author and is not intended as, nor may it be used as, medical advice. This information should not replace the clinical decisions of a licensed professional based on personal examination. The author shall have no responsibility for the use or misuse of this information. Attention deficit disorder (ADD) is a descriptive term used by professionals to indicate that a child or an adult has a significant problem maintaining attention (that is, staying on task) when it is reasonable to expect them to be able to do so. There are many causes of inattention, ranging from boredom to neurological (including ADHD) and psychological problems. (See 3 below.) Attention Deficit Hyperactivity Disorder (ADHD) is the diagnosis currently used by clinicians to indicate a neurological disorder with three prominent clusters or groupings of problems that can occur separately or together: - inattention and distractibility; - hyperactivity and impulsivity ("disinhibition"); - disorganization or problems of "executive functioning" In this review, the descriptive term, ADD, refers to the presence of one or more of these symptoms, regardless of diagnosis. The diagnostic term, ADHD, refers to the neurological disorder as described in the Diagnostic and Statistical Manual (DSM IV) that is currently used by clinicians. The symptoms of ADHD are grouped into four diagnostic categories based on the manifested symptoms: Inattentive Type, Hyperactive Type, Mixed Type, and Other. Note: The diagnostic terms and the descriptive symptoms change with each new DSM as we learn more about the disorder. As an example, ADHD was previously called the Hyperkinetic Reaction of Childhood before we knew that only half of the children with ADHD are hyperactive, and that 5% of adults have ADHD. This review focuses on the specific symptoms of ADHD (and ADD) that are "targeted" for treatment: In ADHD, the brain often processes information too slowly or too quickly (that is, inconsistently) compared to persons who don't have ADHD. Persons with ADHD have difficulty staying on task and tend to be easily distracted and disorganized. Of course, they can and do compensate somewhat. However, as they get older, the information to be processed gets more complicated, and there is a sequence of things to do rather than just a few things at a time. Some children do fine until "show and tell" is replaced by primarily verbal or written instructions or material, especially in the mid-grade school years. Others do all right, even if they do work longer and harder than their peers, until high school, or, in some cases, until college, when they just can't keep up with the others. Even if the person is "hyperactive" and impulsive (see below) as well as inattentive, the brain processes information inconsistently. It's counterintuitive. We'd expect that a hyperactive person processes information too fast to keep things straight. However, the person with ADHD sometimes processes too slowly and responds with confusion, frustration, and a sense of failure because they can't understand the message or respond appropriately. Half of the children with ADHD simply process information inconsistently, but they aren't hyperactive.
Adults with ADHD aren't usually hyperactive even if they were as children- they "outgrow" the hyperactivity component although they often remain physically and/or verbally impulsive. Since children are usually referred to a clinician because they are disruptive and/or disrupted, children who are inattentive but not hyperactive are usually not referred and, instead, are thought to be uninterested, noncompliant, easily bored, or maybe even not too smart. These children often don't get to a clinician, and don't get diagnosed. Instead, they often end up with low self-esteem, being oppositional, and/or favor activities that hold their attention. Case illustration: A medical student asked for a clinical consultation after hearing a lecture on adult ADHD. He'd been diagnosed as a child as having dyslexia (a reading problem) and received special educational services. He did all right in school but had to study much more than his peers, especially in high school and college. Since reading was a problem, reading assignments and tests were particularly difficult for him. He devised all sorts of coping strategies like taking frequent short breaks, and studying at night when it was quiet. He assumed that he was of average ability and attributed his academic progress to working so hard. The clinical assessment revealed that he had the inattentive type of ADHD. There was no evidence of dyslexia (although he may very well have had it as a youngster), and he was actually much smarter than he thought. He responded very well to medication (see 13, below), and is now a successful physician. Persons with ADHD who are hyperactive (that is, overactive) and/or impulsive do not successfully control their behavior (leading to impulsivity and related problems) and/or do not modulate activity level (leading to hyperactivity). It’s like their "brakes" don’t work well- they have difficulty stopping and thinking before they act. They might be physically and/or verbally overactive. As noted above, these are the children who are referred to a clinician because their behavior bothers others- they can be irritable, aggressive, destructive, and just downright obnoxious. Some are just all over the place- they can't sit or stand still for very long. And some are all of the above. Case illustration: His parents have always had difficulty managing Bobby's behavior. As an infant and a baby, he was difficult to settle down with frequently interrupted sleep, colic, and irritability. As a toddler, he was into everything- running into the street, breaking things, and still very irritable. Within days of starting preschool, after his teachers recommended that he be evaluated, it was determined that he had the mixed type of ADHD (both inattentive/distractible and hyperactive/impulsive). He responded nicely to medication (see 13, below), short term individual counseling, and parental consultations to help them manage his behavior more effectively and consistently. Over half of the children with either type of ADHD grow up to be adults with ADHD. If the diagnosis of ADHD (with or without hyperactivity) was missed in childhood, and the person did not "outgrow" the processing problem in the teen years, they can end up with complications of untreated ADHD, including low self-esteem/depression, obsessive-compulsive traits, excessive anxiety ("fear of failure"), antisocial traits, and/or substance abuse, using cocaine, alcohol, methamphetamine, marijuana, and excessive sleep medications. 
Individuals with untreated ADHD also tend to unconsciously self-medicate with excessive amounts of caffeine and nicotine. (Both caffeine and nicotine are psychostimulants. They stimulate the brain. However, they are also very addictive and have some very nasty side effects.) Persons with ADHD often have difficulty "putting it all together". Sequential information is somehow all mixed up or lost when recorded in short-term memory. When the person tries to retrieve the information from short-term memory, some of the data are missing and some of the data don't make sense, making it difficult to respond appropriately and correctly assess the results. The person has difficulty organizing themselves- projects are begun and abandoned unfinished. Sometimes their sentences make sense, but their paragraphs don't, literally and figuratively. It helps to be intelligent, and to be able to cope better than others, but people with executive functioning problems can't perform up to their ability even when working much harder than others. Frustrated, they try harder and/or give up. Case illustration: A very successful scientist with a Ph.D. was promoted from a research position to manager of his section. Within days, he was overwhelmed by details and unable to keep organized. He'd always been a hard worker- even in school he studied far more than his peers and obtained good grades. A clinical evaluation revealed a very high IQ and inattentive type of ADHD with prominent executive functioning problems. Fortunately, he responded very well to coaching (focused on acquiring organizational skills and reducing distractions) and to medication (see 12 and 13, below). The term ADHD is really a misnomer. It's not really a disorder. By definition, a disorder has certain characteristic symptoms (signs and behaviors that are "abnormal"), a predictable natural history (what happens over time without treatment), and a common underlying cause ("etiology"). Treatment, if any, is directed to modify the symptoms or alter the underlying cause of the disorder. Instead, ADHD is a symptom complex, and the diagnosis is based on the presence of a sufficient number and severity of the symptoms that are listed in the current diagnostic handbook (DSM IV) that clinicians use. However, this exact complex of symptoms has many very different causes (etiologies) that have different natural histories and respond to very different treatments. There are many possible causes of attention problems, including: a) it's normal, age-appropriate behavior that is mislabeled; most of the overly active, difficult-to-manage children don't have ADHD; Case illustration: Sue was a very intelligent, active, intrusive, and somewhat "bossy" six-year-old girl who was a "management" problem at home and in school. She always wanted to do it herself and didn't "listen well". Her parents tended to be inconsistent in their behavior management attempts and to be easily irritated by her. Her teacher was boringly repetitive and pedantic. Sue didn't have ADHD- she was what Linda Budd called "active alert". Perfectly normal. Things improved considerably with some behavior management counseling for the parents and consultation with the teacher. Note: Linda Budd's books on the active alert child are very, very helpful even if the child does have ADHD.
b) any number of general medical problems (such as anemia, hyperthyroidism, chronic ear infections, and dietary inclusions/sensitivities); Clinical comment: Dietary sensitivities do exist although they are not very common. One of our studies done some years ago revealed that only one of twenty children whose ADHD symptoms reportedly "responded" to dietary management did, indeed, respond sufficiently to changes of diet. c) many medications (such as anticonvulsants, antihistamines, and psychodepressants that sedate or slow the brain); Comment: Since these medications are often necessary for the general well being of the person, it's important to use the lowest effective dose to minimize side effects. d) toxic conditions (drug induced or an illness); e) sensory deficits (like undetected hearing and visual impairments) and sensory hypersensitivities; Comment: The clinician needs to consider all of these potential problems when evaluating attention. f) neurological problems other than ADHD, such as visual and/or auditory distractibility, sleep disturbances (including narcolepsy), epilepsy, "acquired/traumatic" or Traumatic Brain Injury (TBI); Case illustration: A successful professional was seriously injured in an auto accident in which close relatives were killed. He was evaluated by teams of professionals, and, although he'd had a severe concussion, there was no sign of brain damage or memory impairment. His recovery was slow but steady with many surgeries, medications, and rehabilitation interventions. Several years later, he was telling a friend, a psychologist, that in spite of grief counseling, he remained "depressed"- he felt preoccupied and was distractible, frequently off task, disorganized, and easily bored. These are symptoms of depression, and they are also symptoms of ADHD, inattentive type. When his friend referred him for an ADHD assessment, it was discovered that the evaluation obtained after the accident did not include a T.O.V.A. even though brain injuries can cause ADHD. It turned out that he did have traumatic ADHD, and his symptoms responded to treatment. g) family style and (dis)organization (including social and cultural factors); h) lack of school readiness, different learning style, and low motivation; Comment: Some individuals learn best with a "hands on" experience rather than hearing or reading about it. i) stress (including emotional trauma and inappropriate demands); j) intellectual impairment and precocity; k) learning disabilities; l) other psychiatric conditions including abuse/post-traumatic stress disorder, psychosis, bipolar or obsessive-compulsive disorders, autism, Tourette, depression, and anxiety; Comment: A multi-faceted clinical evaluation is needed to determine whether one or more of these conditions exist with or without ADHD. m) substance use, abuse, and withdrawal (including caffeine and nicotine); Comment: Substance use and abuse are common in untreated individuals with ADHD, and the co-existence of ADHD makes the treatment of substance abuse more difficult. Although it seems counterintuitive to treat a substance abuser with ADHD with low doses of psychostimulants (see 13 below), it's the most effective treatment. n) behavior disorder including oppositional/defiant; Case illustration: Jack was six years old when seen by his family physician because of hyperactivity, impulsivity, stealing, and temper tantrums at home and at school, where he was not progressing academically.
Assuming that Jack had ADHD, combined type, the doctor prescribed 10 mg of methylphenidate (a psychostimulant). Jack initially appeared to be less hyperactive and impulsive. The dosage was increased to 20 mg with minimal improvement and some increase in irritability and sleep disturbance. Jack was subsequently seen for a psychological evaluation and was diagnosed and successfully treated for a behavior (conduct) disorder without medication. o) and, finally, the neurological disorder of attention or ADHD To complicate matters even further- these causes are not mutually exclusive. An individual with the ADHD symptom complex could very well have more than one cause co-existing (co-morbidity) and needing more than one treatment modality. Prime examples would be low self-esteem and depression. In addition, there can be a genetic component, since a percentage of individuals with ADHD have close relatives with it. Sometimes co-morbid problems, like low self-esteem, are so prominent that the clinician may not recognize the underlying attention disorder. This is often the case in children with the Inattentive Type of ADHD and in adults when ADHD wasn't diagnosed in childhood. So, it's very important that the clinician carefully considers all of the possible causes of the symptom complex without leaping to a conclusion and prescribing a treatment. Selecting a diagnostician is not an easy task- you want someone who has the necessary expertise. An excellent source of information is The TOVA Company, which maintains an up-to-date directory of clinicians who specialize in the diagnosis and treatment of attention disorders, including ADHD. For free recommendations of clinicians in a particular geographical area, please call 1.800.REF.TOVA (800.733.8082). The symptom complex of ADHD occurs in 7-8% of children and 5% of adults. The number of ADHD diagnoses is definitely increasing, in part reflecting the increased awareness by the general public and professionals alike. Some of the increase is due to assuming that every overly active youngster has ADHD. (See 3 above.) Some of the increase reflects the increasing number of brain injuries from accidents, etc. While we used to think that there were many more males than females with ADHD, we now know that females tend to have the inattentive type of ADHD and are often missed because they're not bothering anyone. The same was true for adults- we used to think that all of the children with ADHD "outgrew" it by the mid-teen years. Now we know that only half of them do, although the hyperactivity component generally does drop out. Diagnosing ADHD is not an easy process. Perhaps a third of the children referred to us with the diagnosis of ADHD (and sometimes being treated as having ADHD) don't have ADHD. They have the symptom complex but not ADHD. On the other hand, there are at least as many undiagnosed children and adults who have ADHD (especially the inattentive type). The T.O.V.A. is a computerized continuous performance test (CPT) that is used to assess attention and impulsivity. There are two types of T.O.V.A. test: the visual test measures visual information processing, and the auditory measures auditory information processing. Designed like computer games, both T.O.V.A. tests are easy to administer to children (age four and older) as well as adults. The visual T.O.V.A. uses two simple geometric figures to measure attention, and the auditory uses two tones. Unlike other CPTs, the T.O.V.A.
avoids the confounding effects of language, cultural differences, learning problems, memory, and processing complex sequences. The visual test target is a square with a second but smaller square inside of it, near the upper border. The nontarget is a square with the smaller square near the lower border. The auditory test uses two easily discriminated notes. The high note is the target, and the low note is the nontarget. That's it- no complicated sequences of numbers or letters, no confusing colors or sounds. A target or a nontarget randomly flashes on the screen or is sounded every two seconds for a tenth of a second (100 msecs). The instructions are to press a specially designed, accurate microswitch as fast as you can every time a target appears or is heard, but not to press the microswitch when a nontarget appears or is heard. It's important to be fast but not too fast- it's just as important to avoid pressing the microswitch when it's a nontarget. It's that simple. Well, it actually isn't that simple. The targets and nontargets are presented in two different patterns. In the first half of the test, the target randomly occurs once for every 3.5 nontargets. So the first half of the test is called the infrequent (target) condition. With the visual test you really have to focus on the screen, or you'll miss the occasional target. With the auditory test, you have to listen carefully, or you'll miss the occasional high note. The excitement (if there is any) wears off very quickly, because the first half of the test is 10.8 minutes long. It gets very boring very soon, and that's what we want- a measure of attention in a boring task. The second half of the test is also 10.8 minutes long, and now the target occurs 3.5 times for every one random nontarget. So it's called the frequent (target or response) condition. In contrast to the first half, you're pressing the microswitch most of the time, and every once in a while you have to inhibit the natural tendency to respond because a random nontarget occurs. This half is more exciting than the first half and provides a measure of attention in a stimulating task. Why do we need visual and auditory versions of the T.O.V.A.? Most people are "concordant" for both visual and auditory information processing. That is, they visually and aurally process information similarly, whether it be slowly, quickly or in between. However, a significant number (estimated at 12%) of individuals are "discordant" and process visual and auditory information differently. That is, they may be significantly slower in one than in the other modality. So we need to test both visual and auditory processing. a) The consistency of the response times is called Response Time Variability and is measured in milliseconds. Response Time Variability is the most important measure of the T.O.V.A. and tells us how consistent (or inconsistent) a person's Response Time is. b) The time it takes to respond to a target is called Response Time and is measured in milliseconds. This measure tells how fast (or slow) a person processes information and responds by pressing the microswitch. c) d' (d prime) is derived from Signal Detection Theory and measures how quickly one's performance worsens (deteriorates) over the 21.6 minutes of testing. d) When someone responds to the nontarget, it is called a Commission Error, a measure of impulsivity (also called disinhibition). e) When someone does not respond to the target, it is called an Omission Error, a measure of inattention.
f) Post-Commission Response Times measure how much faster or slower a person becomes after mistakenly responding to a nontarget. This measure helps us to identify one of the other causes (like conduct disorder) of the symptom complex. g) Multiple Responses are the number of times a person presses the microswitch more than once per target. This measure helps us to identify other neurological conditions. h) Anticipatory Responses measure how often a person presses the microswitch so quickly (<150 msec) that they're probably guessing rather than waiting longer and being sure. In contrast to other commercially available CPTs that use the computer keyboard or mouse to record responses, the T.O.V.A. uses a microswitch. Since Response Time Variability and Response Time are two very important measures, we need to measure time very accurately to determine how fast and how inconsistent Response Times are. Why a microswitch? To obtain very accurate time measurements (±1 msec). Computer keyboards and mice are not as reliable and can vary significantly (±28 msec). In addition, if you use a different computer with a different measurement error to retest someone, it's very difficult to compare the results. Once testing is completed (21.6 minutes for those 6 years old and older, 10.8 minutes for 4- and 5-year-olds), the results are immediately analyzed, and the complete interpretation and graphics are available on the monitor and can be printed out. The T.O.V.A. report compares the test results with the results of a large number of people who do not have an attention problem. The test results are interpreted and reported as within the normal expectable range or not. As the brain matures and changes, it processes information faster and more accurately from childhood to the late teen years/early twenties, then remains pretty steady until the early- to mid-sixties, when it slows somewhat. (So it is accurate to say that younger adults are faster than older ones, but older ones can compensate by exercising better judgment.) It's also true that males and females process information differently. Thus, age and gender make a difference. For instance, when comparing individuals without ADHD, eight-year-old boys perform differently than eight-year-old girls and differently than nine-year-old boys.
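The core measures described above lend themselves to straightforward computation. Below is a hypothetical scoring sketch for a generic CPT-style data set; the trial format, threshold, and tallies are invented for illustration and are not the T.O.V.A.'s actual scoring algorithm:

```python
import statistics

# Hypothetical scoring sketch for a CPT-style test. Each trial records
# whether the stimulus was a target and the response time in msec
# (None = no button press). The format and thresholds are invented for
# illustration; this is not the T.O.V.A.'s proprietary scoring code.

ANTICIPATORY_MS = 150  # responses faster than this count as guesses

trials = [
    {"target": True,  "rt_ms": 320},
    {"target": True,  "rt_ms": None},   # omission error
    {"target": False, "rt_ms": 280},    # commission error
    {"target": True,  "rt_ms": 140},    # anticipatory response
    {"target": True,  "rt_ms": 350},
    {"target": False, "rt_ms": None},   # correct inhibition
]

# Valid response times: correct hits that are not anticipatory guesses.
valid_rts = [t["rt_ms"] for t in trials
             if t["target"] and t["rt_ms"] is not None
             and t["rt_ms"] >= ANTICIPATORY_MS]

omissions = sum(1 for t in trials if t["target"] and t["rt_ms"] is None)
commissions = sum(1 for t in trials if not t["target"] and t["rt_ms"] is not None)
anticipatory = sum(1 for t in trials
                   if t["rt_ms"] is not None and t["rt_ms"] < ANTICIPATORY_MS)

print(f"mean response time:             {statistics.mean(valid_rts):.0f} ms")
print(f"response time variability (SD): {statistics.stdev(valid_rts):.0f} ms")
print(f"omission errors:    {omissions}")
print(f"commission errors:  {commissions}")
print(f"anticipatory:       {anticipatory}")
```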
The Second Aliyah, in the wake of pogroms in Czarist Russia and the ensuing eruption of anti-Semitism, had a profound impact on the complexion and development of modern Jewish settlement in Palestine. Most of its members were young people inspired by socialist ideals. Many models and components of the rural settlement enterprise came into being at this time, such as "national farms" where rural settlers were trained; the first kibbutz, Degania (1909); and Ha-Shomer, the first Jewish self-defense organization in Palestine. The Ahuzat Bayit neighborhood, established as a suburb of Jaffa, developed into Tel Aviv, the first modern all-Jewish city. The Hebrew language was revived as a spoken tongue, and Hebrew literature and Hebrew newspapers were published. Political parties were founded and workers' agricultural organizations began to form. These pioneers laid the foundations that were to put the yishuv (the Jewish community) on its course toward an independent state. In all, 40,000 Jews immigrated during this period, but absorption difficulties and the absence of a stable economic base caused nearly half of them to leave. See Also: First Aliyah | Third Aliyah | Fourth Aliyah | Fifth Aliyah | Aliyah Bet
Since the beginning of human civilization, people have settled along rivers and on the fertile deltas created by them. The sediment carried and deposited by mighty rushing waters creates land rich in nutrients and ideal for crops and livestock. Where there are uninhibited rivers, there is new, rich land, and where such resources abound there are people. Watch the video (YouTube) for information about wetlands, land loss processes, and sea level rise in Louisiana. Topics include: - What are Wetlands? - The Importance of Louisiana's Wetlands - The Importance of the Mississippi River and the Gulf - The Crisis of Wetland Loss - The Causes of Wetland Loss - Economic Effects - More than 35 square miles of valuable wetlands are washed away each year by coastal erosion. - The 62 Coastal Wetlands Planning, Protection, and Restoration Act projects are anticipated to enhance 1,374 square miles of wetlands. - Louisiana has more than three million acres of coastal wetlands. - As much as 16 percent of the nation's fisheries' harvests, including shrimp, crabs, crawfish, oysters, and many finfish, come from Louisiana's coast. - Louisiana provides more fishery landings than any other state in the conterminous United States (more than 1.1 billion pounds/year), and more than 75 percent of Louisiana's commercially harvested fish and shellfish species are dependent on wetlands. - Louisiana's wetlands provide habitat for more than five million wintering waterfowl annually. - Louisiana's wetlands are home to many endangered species. - Economic benefits of Louisiana's wetlands include: — $30 billion per year in petroleum products. — $7.4 billion per year in natural gas (21 percent of the nation's supply). — 400 million tons per year of waterborne commerce. — $2.8 billion per year in commercial fishing. — $1.6 billion per year in recreational fishing. — $2.5 million per year in fur harvest (40 percent of the nation's total). — $40 million per year in alligator harvests. - Louisiana accounts for up to 40 percent of the coastal salt marshes in the contiguous United States and 80 percent of the nation's coastal wetland loss. Wetlands are among the most important and highly productive ecosystems on earth, and Louisiana is losing them at a rate of 25-35 square miles per year. At this rate, Louisiana could lose another 527,000 acres of coastal wetlands by the year 2050! (Unit conversions for these figures are sketched after this list.) - Wetland losses in Louisiana are due to a combination of human and natural factors, including subsidence, shoreline erosion, freshwater and sediment deprivation, saltwater intrusion, oil and gas canals, navigation channels, and herbivory. Using detergents to help get rid of spilled oil in marine waters is more harmful to the environment than if the oil had been left alone. In fact, putting soap in the water is against the law and can result in fines of up to $25,000 for each incident. - Approximately 71 percent of the earth's surface is covered with water. Of all that water, about 97 percent is saltwater, two percent is frozen in glaciers, and one percent is usable freshwater. - Today, the earth has approximately the same amount of water as when it was formed; the earth will not receive additional water. - The water consumed today may have been a drink for a dinosaur. - An average of 168 gallons of water is used per person per day. - In the United States, approximately 25 trillion gallons of freshwater are used each year. - Freshwater is being used faster than groundwater is being recharged.
- In the United States, more than 50 percent of the wetlands that recharge and purify groundwater have been destroyed.
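For readers reconciling the list's different units, the conversions are quick (one square mile is 640 acres). A minimal sketch:

```python
# Quick unit conversions for the wetland-loss figures cited above
# (1 square mile = 640 acres).

ACRES_PER_SQ_MILE = 640

for rate_sq_mi in (25, 35):
    print(f"{rate_sq_mi} sq mi/yr = {rate_sq_mi * ACRES_PER_SQ_MILE:,} acres/yr")

projected_acres = 527_000
print(f"527,000 acres is about {projected_acres / ACRES_PER_SQ_MILE:,.0f} square miles")
```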
How to score in A-level economics for JC students? How do I write good introductions? A good economics essay introduction captures the reader's attention and gives an idea of the essay's focus. It requires students to define key terms and to set out the overview and scope of the question. Clear Definitions: Define the keywords in the question Definitions are important to showcase the student's level of understanding of the content to the examiners. Therefore, students should make a list of definitions, as they will come in handy when preparing for the exams. Alternatively, students may purchase ready guidebooks, which list the common definitions in the A-level economics syllabus that they need to memorise. Writing an overview If the essay question asks about a specific country or a particular market structure, it is important to also describe some characteristics of the country or the market structure given in the question. For example, if the essay is on the Singapore economy, students need to write something like this: "Singapore is a small and open economy. Its small population size and lack of natural resources will mean that it has a small domestic market and is heavily dependent on trade for growth and survival. Singapore is thus very vulnerable to external shocks, which cause instability to the economy." Next, if the essay is on a perfectly competitive market, then an appropriate introduction would be as follows: "A perfectly competitive market is characterised by the fact that no single firm has influence on the price of the product it sells. A perfectly competitive market has several distinguishing characteristics: there are many buyers and sellers in the market; the commodity sold is homogeneous; there is free entry and exit from the industry; perfect mobility of factors of production; transport costs are assumed to be negligible; both buyers and sellers are independent in their decision making and there is perfect knowledge." Scope: State the scope of the question clearly Students need to define the scope of the essay clearly from the beginning, so that they do not go out of point. Briefly tell the examiner in the introduction which areas will be discussed in the essay, so that he can anticipate what is about to come up in the script. Students can use phrases such as "This essay aims to explain…" to state the scope of the question. How do I write a good body? Topic sentence and Economic Analysis Each paragraph of the body should contain only one key idea, which should be conveyed in the topic sentence. The key idea should be based on economic theories, principles and concepts. An example of a topic sentence is as follows: "In Singapore, the government has encouraged employers to adopt a flexible wage system, which would help reduce unemployment during economic downturns." Diagrams should be drawn whenever appropriate and references must be made to the diagrams (e.g. a rightward shift of the demand curve from DD1 to DD2). The axes should be labelled as specifically as possible (e.g. instead of merely labelling price and quantity, the axes could be labelled as Price of Housing and Quantity of Housing respectively). Arrows that depict the shifts of the curves should also be clearly drawn in the diagram. The diagram should be drawn using a pencil and ruler and should preferably take up about one-third of the foolscap paper.
Using Contextual Examples Students need to include examples in their essays in order to demonstrate their ability to apply economic theories to real-world events. When possible, students should use the context given in the preamble and avoid using hypothetical examples in the essays. In Singapore's context, an example of a natural monopolist is the Public Utilities Board (PUB), which supplies water. The domestic size of the market is too small to support more than one large firm. The Singapore government adopts market-oriented policies such as manpower policies to upgrade the skills of workers facing the threat of structural unemployment. Examples of such policies include the Skills Redevelopment Programme, introduced to retrain displaced workers for employment in the InfoComm sector; the Workforce Development Agency (WDA) Workforce Skills Qualification (WSQ) programme, which trains workers in sector-specific skills; and job redesign, which makes jobs more attractive to workers, especially older workers. How do I write good evaluation points? The evaluation can be what sets students apart from others if written well. Here are some things that examiners are looking for when reading an evaluation: 1 Recognise underlying assumptions. For example, in dealing with questions on demand and supply, it is important to write about the ceteris paribus assumption and also to give an example of how it can be altered in the short term, i.e. the taste and preferences of a consumer may change over time. 2 Consider the time frame: Different policies might have different impacts on the economy in the short term versus the long term. For example, supply-side policies need time to take effect and thus require a long time frame. 3 Consider the feasibility of the policy: the extent to which a particular policy can be implemented. For example, an expansionary fiscal policy might not be feasible for a country that is facing a huge budget deficit. 4 Consider the effectiveness of the policy implemented and whether it can solve the problem. Students could consider the unique nature of the economy given in the question. For example, an exchange rate policy would be more effective in a small and open economy than in a large and less open economy. 5 Consider the desirability of the policy: whether there are any side effects that the policy might have on other economic objectives, i.e. whether there are conflicts of goals. 6 Consider the existing state of the economy: whether the country is currently facing a recession or inflation; the severity of the problems faced can also affect the main economic priority of the government. What else do students need to take note of? - Always plan the essay before writing. - Ensure that the paper is completed within the allocated time.
All four gas giant planets in our solar system have moons orbiting them, but it's unknown whether that's true of any of the many gas giant exoplanets that have been discovered orbiting other stars. Researchers have a theory about why, and their concept could also be the mystery player behind other astronomical phenomena. Astronomers have yet to find a confirmed "exomoon," or a moon outside our solar system, even though they are predicted to form around massive planets. Exomoons are harder to pinpoint than exoplanets because of their smaller size. In 2018, astronomers discovered what could be an exomoon, estimated to be the size of Neptune. It was found in orbit around a gigantic gas planet 8,000 light-years from Earth. But the scientists behind this discovery, hesitant to confirm that the new find is an exomoon due to some of its peculiarities, say more observation is needed. Their findings were published in the journal Science Advances. In a new study soon to be published in the Monthly Notices of the Royal Astronomical Society journal, other researchers modeled the formation of exomoons around gas giant exoplanets. They projected that the massive planets would kick moons out of orbit and send them on their way: as angular momentum is exchanged between the giant exoplanet and its moon, the moon can gain enough energy to essentially escape the planet's gravity. While half would probably be destroyed by this expulsion or a potential collision with the planet or star, the other half are projected to survive. A surviving expelled moon would end up circling its star with an eccentric orbit similar to Pluto's. Pluto has an angled, elliptical orbit on a different plane than the rest of the planets in our solar system. It takes 248 Earth years to complete one full orbit of the sun. The researchers have dubbed these rogue exomoons "ploonets." Many of the early exoplanets discovered are so-called hot Jupiters, gas giant exoplanets that are closer to their stars than Jupiter is to its own, and warmer. These were common discoveries during the early days of exoplanet hunting because they were easy to find, but they represent only about 1% of known exoplanets now. And research suggests that some of them should have large moons. But if the moons were ejected from orbit, that would explain why exomoons are missing from detection. Instead, the moons are basically on their own. "These moons would become planetary embryos, or even fully-fledged planets, with highly eccentric orbits of their own," said study author Jaime Alvarado-Montes of Macquarie University in Australia. Ploonets might even account for the odd behavior of Tabby's Star. "The strange changes in [Tabby's Star's] light intensity have been observed for years, but are still not understood. Ploonets could be the answer," Alvarado-Montes said. But actual evidence of ploonets remains elusive. It could be that they deteriorate quickly after escaping their planets' orbit and can't be observed. "If the timescales are large enough, we could have real chances to detect them in the near and middle future," the researchers wrote in the study.
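One way to picture the escape mechanism is the Hill sphere, the region where a planet's gravity dominates its star's; a moon that migrates beyond the Hill radius is no longer bound. The sketch below uses the standard approximation r_H = a(m_p / 3M_star)^(1/3) with illustrative masses and separations; the study itself models the dynamics in far more detail:

```python
# Hill radius: the approximate zone where a planet's gravity dominates
# its star's. A moon driven beyond this radius can escape and become a
# "ploonet". The masses and separations below are illustrative values,
# not parameters taken from the study.

M_SUN_KG = 1.989e30
M_JUP_KG = 1.898e27
AU_KM = 1.496e8

def hill_radius_km(a_au: float, m_planet_kg: float, m_star_kg: float) -> float:
    """Standard approximation r_H = a * (m_p / (3 * M_star))**(1/3)."""
    return a_au * AU_KM * (m_planet_kg / (3 * m_star_kg)) ** (1 / 3)

# A hot Jupiter at 0.05 AU versus Jupiter at 5.2 AU around a Sun-like star:
for label, a in (("hot Jupiter, 0.05 AU", 0.05), ("Jupiter, 5.2 AU", 5.2)):
    print(f"{label}: Hill radius ~ {hill_radius_km(a, M_JUP_KG, M_SUN_KG):,.0f} km")
```

The comparison illustrates why hot Jupiters are plausible moon-losers: their Hill spheres are tiny, so a moon nudged outward by tidal angular-momentum exchange does not have far to go before it slips free.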
Many species of animals have elaborate sexual performances to attract mates, and these mating displays are often observable in multiple sensory modalities. Many male birds of paradise, for example, sing and dance while wearing the most lavish plumage; the females then use their visual and auditory systems to judge the displays and select the best song and dance.

Although singing birds are a familiar daytime example of mating displays, many other animals sing too, sometimes at night. In the evening, many species of frogs form large aggregations from which to call for mates. The túngara frog (Physalaemus pustulosus), for example, is a neotropical frog that gathers in pools of water, where many male calls overlap as the frogs vie for nearby females. With such abundant competition, males must call continually if they hope to find a mate.

This repeated calling is energetically very taxing, yet these males don't get out of breath. Instead, they have evolved a large vocal sac that allows them to recycle air. Air passes from the lungs, over the vocal cords, and into the vocal sac, which expands like a balloon from the throat. The elasticity of the vocal sac then rebounds, forcing the same air back into the lungs. This means the frogs don't have to waste time and energy refilling their lungs before each call, which allows males to call more often and so makes them more attractive to females.

Females, however, are not the only ones listening. Frog-eating bats (Trachops cirrhosus) are sit-and-wait predators that eavesdrop on túngara frog mating choruses. The bats often roost in trees until they hear a calling túngara frog; once they have located their prey, they take flight and use echolocation to navigate through the forest toward the frog. While the bats continue to use the male frog's call as a cue throughout the hunt, it turns out that they can also detect the moving vocal sac via echolocation. When bats can perceive both the mating call (through what is known as passive listening) and the vocal sac cue (through echolocation), they find their prey more effectively.

A research team from the Smithsonian Tropical Research Institute used robotic frog models to understand how these bats use their sensory systems to find their túngara frog prey. The team found that the bats prefer to attack frogs whose vocal sacs are moving, likely because the additional cue makes prey easier to find and pinpoint. This makes the vocal sac more costly for male frogs than previously thought. Although female frogs sometimes use the vocal sac as a visual cue to find mates, they are not as attracted to it as the bats are. So while calling males can recycle air with their vocal sacs, enabling them to call more often to females, bats often make them pay with their lives.

These findings are described in the article "Multimodal weighting differences by bats and their prey: probing natural selection pressures on sexually selected traits," recently published in the journal Animal Behaviour. The work was conducted by Dylan G. E. Gomes, currently at Boise State University; W. Halfwerk of VU University, Amsterdam; R. C. Taylor of Salisbury University; M. J. Ryan of the University of Texas at Austin; and R. A. Page of the Smithsonian Tropical Research Institute.
Patience can only last so long. As much as the United States wanted to stay neutral during World War I, it could only tolerate so much. America saw no reason to join either side, and it had no way of knowing who would win. Hoping to set an example of peace to the world, America stuck with its choice of neutrality. Little by little, however, Germany pushed America to its limit and drove it to war. Submarine warfare, Wilson's reelection, and the Zimmermann note all led to the "sinking" of America's patience, causing it to enter the war.

As the war progressed, Great Britain began to make more use of its naval strength, setting up blockades along the coast of Germany to prevent weapons and other military goods from getting through; the blockade led to the starvation of about 750,000 Germans. In response, Germany set up a submarine blockade of its own, declaring that any ship found in the waters around Britain would be sunk without warning. Great Britain was one of America's allies, so when the worst disaster occurred, the sinking of the Lusitania, American opinion toward Germany turned sharply negative. The sinking of the Lusitania killed 1,198 people, including 128 Americans. With Germany now taking American lives, America grew very angry. Americans felt they should not have to sit and watch their people being killed in a European war, one in which they had agreed to stay neutral. This brought America one step closer to entering the war, but still it waited.

A year later, in 1916, after a very long election, the man who "kept us out of war," Woodrow Wilson, was reelected. Wilson then tried to end the war by attempting to get both sides to agree on terms under which they would be willing to stop fighting. This meant the fighting would end with no winner. Wilson's plan also called for all nations to join a League of Peace to sustain international peace and cooperation. Germany did not agree; instead, believing it had a good chance to defeat Great Britain, it returned to its original plan of submarine warfare and announced that it would sink all ships in British waters, whether enemy or neutral. Wilson was shocked that Germany would now sink American ships. He and the government knew the country was gradually moving closer to entering the war, but they decided to keep waiting.

After Wilson failed to secure peace, British agents gained possession of a secret telegram from Germany to Mexico. The telegram asked whether Mexico would form an alliance with Germany, promising that if war with the U.S. broke out, Germany would protect and support Mexico. With Germany sinking American ships and now trying to turn other countries against America, the telegram completed the list of reasons the U.S. needed to take part in the war. Wilson and his country had no choice but to act. America now felt it had to enter the war to secure a better future for its peace and freedom.

America's patience was slowly "sinking" year by year, especially with Germany keeping a close eye on it. With the U.S. on its way to war, the major events that pushed it there would not be forgotten.
Submarine warfare, the reelection of Wilson, and the Zimmermann note remain in history as the causes of America's entry into the war.
An Introduction To Philosophy: Knowledge, God, Mind and Morality, 1st Edition

Ohreen's An Introduction to Philosophy is a one-semester anthology intended to bring the relevance of philosophical issues to light for students in interesting and important ways. Reading original philosophical work can be arduous for the beginning student, so this anthology provides historical and contemporary readings that are easy to understand and of high philosophical quality. The articles have been edited to ensure students get the most salient philosophical ideas without having to read superfluous details. Each chapter starts with a comprehensive introduction or commentary on the readings, setting out the main philosophical themes and concepts. The text is intentionally structured to give students contrasting and critical views regarding knowledge, God, mind, and morality.

In this ground-up Canadian text, students receive a unique set of readings focusing on five core issues in philosophy: What is the value of philosophy? Does God exist? What can we know? How does the mind relate to the body? And what is morally right and wrong? The readings have been selected to focus on philosophical depth, not breadth, and Canadian context has been included where appropriate. Moreover, the total number and size of the readings has been reduced, in comparison to other texts, to maximize text usage for students. An Introduction to Philosophy has been developed to get students thinking, philosophically, about the world in which they live.

- Shorter Readings: Articles are edited down to their most salient philosophical ideas, since many anthologies include pieces that are too technical for beginning students.
- A "What Do You Think?" box in each chapter helps students form their own opinions and draws out class discussion.
- Detailed Chapter Introductions provide context for the readings.
- Exercises for Group and Class Discussion: Discussion questions in each chapter sub-section facilitate class or group discussion.
- Biographical author sketches and photos accompany each reading to help students identify with the writers.
- Modern Issues Linked to Classical Theory: Where appropriate, ancient sources are juxtaposed with accessible contemporary sources to illustrate how an idea survives and changes.

Table of Contents

- Chapter 1: The Value of Philosophy
  - The Purpose of Philosophy
    - Plato, The Apology (selection)
    - Bertrand Russell, The Value of Philosophy
- Chapter 2: Ways of Knowing
  - Scepticism and Rationalism
    - Rene Descartes, Meditations on First Philosophy (First and Second Meditations)
    - John Locke, Essay Concerning Human Understanding (selection)
    - George Berkeley, A Treatise Concerning the Principles of Human Knowledge (selection)
    - David Hume, Enquiries Concerning Human Understanding (selection)
    - Ludwig Wittgenstein, On Certainty
    - Georg Henrik von Wright, Wittgenstein on Certainty
  - Feminist Epistemology
    - Lorraine Code, Is the Sex of the Knower Epistemologically Significant?
- Chapter 3: The Existence of God
  - The Ontological Argument
    - St. Anselm, Proslogium (selection)
    - Yeager Hudson, Problems and Possibilities for the Ontological Argument
  - The Cosmological Argument
    - St. Thomas Aquinas, Summa Theologiae (selection)
    - Theodore Schick Jr., The "Big Bang" Argument for the Existence of God
  - The Teleological (Design) Argument
    - William Paley, Natural Theology (selection)
    - David Hume, Dialogues Concerning Natural Religion (selection)
    - Richard Dawkins, The Improbability of God
  - The Problem of Evil
    - William Rowe, The Problem of Evil
  - Belief and Faith
    - Simon Blackburn, Infini-Rien
    - Natalie Angier, I'm No Believer
- Chapter 4: The Mind/Body Problem
  - Dualism
    - Rene Descartes, Meditations on First Philosophy (Sixth Meditation)
    - Patricia Churchland, Substance Dualism
  - Identity Theory
    - William Lyons, Nothing but the Brain
    - Jerry Fodor, Materialism
    - John Searle, Can Computers Think?
  - Eliminative Materialism
    - Paul Churchland, Eliminative Materialism
- Chapter 5: Morality: Searching for Right and Wrong
  - God and Morality
    - Plato, Euthyphro
  - Ethical Relativism
    - James Rachels, The Challenge of Cultural Relativism
  - Utilitarianism
    - John Stuart Mill, Utilitarianism (selection)
    - Richard Brandt, Moral Obligation and General Welfare
  - Deontological Ethics
    - Immanuel Kant, Groundwork for the Metaphysics of Morals (selection)
    - Joshua Glasgow, Kant's Principle of Universal Law
  - Feminist Ethics
    - Alison Jaggar, Feminist Ethics
What is pollination?

Pollination is the first step in the reproductive process of plants. It happens when small grains of pollen are transferred between the male (anther) and female (stigma) parts of a flower. Since plants are rooted in place, they rely on wind, water, or animals to move their pollen between flowers, which in turn creates seeds that bring forth new plants.

Pollinators help plants reproduce.

Over 80% of the world's flowering plants rely on a pollinator, an insect, bird, or other animal, to reproduce. Critters that help transfer pollen include bees, bats, butterflies, hummingbirds, beetles, ants, and many other animals.

Pollinators add value for people and wildlife.

Pollinators play an invaluable role in producing the plants that feed people and many of the Earth's animals. One out of every three bites of food we eat, including chocolate, coffee, nuts, and spices, is created with help from pollinators. Pollinators also play an important role in boosting yields on America's working agricultural lands, and their ecological service is valued at $200 billion each year.

How do plants attract pollinators?

Flowering plants have co-evolved with pollinators to recruit the help of specific species using a combination of shape, scent, and color. For instance, butterflies are lured toward bright, sweet-smelling purple or red flowers, while beetles are drawn to dull-colored white or green flowers. In return for helping the plant, a pollinator is rewarded with a meal of energy-rich nectar or protein-rich pollen. Pollinators also use flowers as shelter, to find mates, or to build nests.

Bees pollinate most of our fresh food.

More than 4,000 native bee species buzz around the United States. Honey bees alone pollinate 80% of all flowering plants, including more than 130 types of fruits and vegetables. Because they are easy to capture, bees can also serve as indicators of ecosystem health.

Pollinators are a key part of the ecosystem.

Beyond moving pollen around, pollinators contribute to healthy soils because they foster diverse plant communities. They are also a key part of the food web: over 85% of birds that breed in the U.S. eat insects, including sage grouse and prairie chickens. Of course, sage grouse and prairie chickens also eat the flowers (called forbs by scientists) that pollinators help produce, making pollinators even more important to these species.

Pollinators and sustainable ranching go hand in hand.

A recent study from Montana State University found that sagebrush rangelands enrolled in rest-rotation grazing plans through the NRCS Sage Grouse Initiative produced better habitat for native pollinators like bees than pastures with no livestock grazing. Similarly, another study showed that rangelands with sustainably managed cattle grazing had a higher abundance of the types of insects that sage grouse chicks eat than nearby ungrazed land.

We can all help pollinators recover.

Pollinator populations are dropping alarmingly across North America due to habitat loss, disease, parasites, and environmental contaminants. For instance, the number of monarchs, the familiar orange-and-black butterfly known for its annual migrations, has decreased from one billion to 34 million since 1995, just 25 years. Luckily, private landowners are stepping up across the country to protect habitat for pollinators.
Through the Farm Bill, NRCS offers dozens of conservation activities that benefit both pollinators and agricultural producers by producing healthy, high-value nectar plants.
By Gisela Sepulveda

Ever heard of diatoms? No? You're not alone. These beautifully patterned, unicellular phytoplankton (a group of algae) are often overlooked, quite literally. Diatoms are small (and when I say small I mean microscopic), with cells ranging from 2 to 500 microns (0.002-0.5 mm). Yet despite their size, they have a massive impact on our lives.

The Art of Diatoms

Diatoms live in aqueous environments, from rivers and oceans to bogs and damp rock surfaces. They are nature's own artwork, coming in a range of weird and wonderful geometric shapes. They have even been used in art: Klaus Kemp, for example, arranges diatoms on slides to create a large variety of patterns, reviving this Victorian-era art form with updated techniques and microscopes to produce a stunning kaleidoscope of these phytoplankton. Diatom art has also seen a revival in a recent documentary by Matthew Killip, who captures these inspiring patterns of nature's work.

The diatom collector Thomas Comber continued this examination of diatoms, developing a large collection of slides, bottles, and notes that can be viewed online and at the Natural History Museum. As a young man he took up microscopy, travelling far and wide to collect diatom samples from a variety of aqueous environments. In a recent volunteer programme, Making the Invisible Visible, Comber's diatom specimens were set up in the specimen preparation area of the Darwin Centre at the Natural History Museum so that each of his slides and notes could be converted into an online database for worldwide use.

Diatoms are more than just nature's mobile art gallery; they also play an important role in helping to predict climate change. They are used in palaeoclimatology, the study of past climates, because they are excellent environmental indicators, being very sensitive to environmental changes and ecological conditions. Their silica cell walls are deposited and preserved in sediments, so they record past changes in climate that can be measured and studied, a useful tool for predicting future climatic change. They can indicate sea temperatures, acidification levels, river quality, the amount of oxygen or carbon in the atmosphere, and much more.

They don't merely act as beautiful indicators of change; they influence it too. As phytoplankton, diatoms photosynthesise to live. This means they produce oxygen; in fact, they produce around a quarter of the oxygen we breathe. And because they take in carbon dioxide from the ocean, absorbed there from the atmosphere, they are also key players in carbon fixation. They can fix as much carbon per day as a forest of plants. So you can breathe easy now, thanks to our small friends!

You may be surprised by the diversity of uses for diatoms and diatomite (a white, silica-rich mineral). It can be found in everyday items from nail polish and paint to insecticides and fertilisers. Alfred Nobel could not have created dynamite without them, cat's eye road markings are lit up by reflective diatom shells, and that nice glass of wine at the end of the day was purified by diatoms. And what pearly whites you have! Thank diatoms: the silica from their cell walls has mild abrasive properties, which is why it is sometimes used in whitening toothpastes. Diatoms can even be used in nanotechnology and swimming pool filters, and they are useful in forensics as well.

So don't overlook our friends, the diatoms.
Afghanistan Colonial Records

British Colonization (1838-1919)

In 1838, the British marched into Afghanistan, arrested Dost Mohammad, sent him into exile in India, and replaced him with the previous ruler, Shah Shuja. Following an uprising, the 1842 retreat of British-Indian forces from Kabul and the annihilation of Elphinstone's army, and the Battle of Kabul that led to the city's recapture, the British placed Dost Mohammad Khan back in power and withdrew their military forces from Afghanistan. With the signing of the Treaty of Rawalpindi on 19 August 1919, King Amanullah Khan declared Afghanistan a sovereign and fully independent state.

References

- Wikipedia contributors, "Afghanistan," in Wikipedia: the Free Encyclopedia, https://en.wikipedia.org/wiki/Afghanistan#Barakzai_dynasty_and_British_wars, accessed 18 November 2020.
Solar energy is commonly understood as the electric or thermal energy obtained by harnessing the rays of the sun that strike the Earth every day. The energy with which the sun radiates our planet is enormous, even though part of it is absorbed by the atmosphere and part is reflected by clouds. Besides absorbing a certain percentage of the radiation, the atmosphere also modifies and alters its spectrum; still, the solar radiation that reaches the Earth drives chlorophyll photosynthesis and makes the life of animals and plants possible.

Given the energy potential poured onto the earth's surface every day, several methods have been developed to take advantage of it. Because this energy is dispersed across the vastness of the earth's surface, it is impossible to convert all of it in a useful way, but the various technologies achieve excellent results, both in terms of efficiency and in environmental terms.

One way to exploit the energy of the sun is to install solar collectors (or thermal panels): devices that convert solar radiation into thermal energy, useful for heating or for producing hot water. Such a system can use natural circulation (without electric pumps), relying on the tendency of heat to rise to drive the flow of the heat-transfer liquid through the exchanger, or forced circulation, in which electric pumps speed up the process. Forced circulation increases the efficiency of the panel, and it is necessary where outside temperatures are harsh, or to produce hot water at night and in the absence of sun.

Concentrating solar panels work through a series of reflecting mirrors that focus the sun's rays onto a heat-transfer fluid, generating the steam used to produce electricity. Compared to flat solar collectors, concentration systems offer higher yields at reduced costs. This approach has recently been used to run solar power plants: thanks to hundreds of mirrors, the heat generated drives turbines and produces huge quantities of electricity.

Photovoltaic panels are the solution for producing electricity in residential and business settings, with the advantage of strongly reduced costs for a minimal investment. The plant is made up of panels of silicon cells which, when struck by the sun's rays, produce electricity. Placed so as to be constantly reached by sunlight, photovoltaic panels supply electricity that varies according to position, irradiance, temperature, and other parameters.

Solar energy is in any case a valid alternative for meeting hot water and electricity needs and for heating spaces, with excellent results and considerable savings. A photovoltaic system can be stand-alone, storing the energy produced and delivering it at a later time, or connected to the electricity grid: in this case, the energy produced by the plant is "sold" to the grid operator, which in some cases credits it against the owner's consumption.
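To make the dependence on irradiance, area, efficiency, and temperature concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers are illustrative assumptions, not figures from this article; real panels are rated under standard test conditions and lose output as the cells heat up.

```python
# Back-of-the-envelope PV output estimate (illustrative values only).
# Output scales with irradiance and panel area, and is derated as the
# cells heat up above the standard test temperature of 25 degrees C.

def pv_power_watts(irradiance_w_m2: float,
                   area_m2: float,
                   efficiency: float = 0.20,        # assumed typical silicon panel
                   cell_temp_c: float = 25.0,
                   temp_coeff_per_c: float = -0.004) -> float:
    """Estimated DC power output in watts."""
    derating = 1.0 + temp_coeff_per_c * (cell_temp_c - 25.0)
    return irradiance_w_m2 * area_m2 * efficiency * max(derating, 0.0)

# 10 square metres of panels under full sun (~1000 W/m^2), cells at 45 degrees C
print(f"{pv_power_watts(1000, 10, cell_temp_c=45):.0f} W")  # ~1840 W
```

Under these assumed values, 10 square metres of panels in full sun with cells at 45 degrees C yield roughly 1.8 kW rather than the nominal 2 kW, which is why position and operating temperature matter as much as raw panel area.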