b, the slope of the best fit line
r2 or R2, the explained variance
r, the correlation coefficient

TI-83: Do steps 1-3, then enter the x list and y list separated by a comma, e.g. LinReg(a+bx) L1, L2, then hit ENTER.

446 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

WHAT TO DO IF R2 AND r DO NOT SHOW UP ON A TI-83/84
If r2 and r do not show up when doing STAT, CALC, LinReg, the diagnostics must be turned on. This only needs to be done once and the diagnostics will remain on.
1. Hit 2ND 0 (i.e. CATALOG).
2. Scroll down until the arrow points at DiagnosticOn.
3. Hit ENTER and ENTER again. The screen should now say:
DiagnosticOn
Done

WHAT TO DO IF A TI-83/84 RETURNS: ERR: DIM MISMATCH
This error means that the lists, generally L1 and L2, do not have the same length.
1. Choose 1:Quit.
2. Choose STAT, Edit and make sure that the lists have the same number of entries.

CASIO FX-9750GII: FINDING a, b, R2, AND r FOR A LINEAR MODEL
1. Navigate to STAT (MENU button, then hit the 2 button or select STAT).
2. Enter the x and y data into 2 separate lists, e.g. x values in List 1 and y values in List 2. Observation ordering should be the same in the two lists. For example, if (5, 4) is the second observation, then the second value in the x list should be 5 and the second value in the y list should be 4.
3. Navigate to CALC (F2) and then SET (F6) to set the regression context.
• To change the 2Var XList, navigate to it, select List (F1), and enter the proper list number. Similarly, set 2Var YList to the proper list.
4. Hit EXIT.
5. Select REG (F3), X (F1), and a+bx (F2), which returns:
a, the y-intercept of the best fit line
b, the slope of the best fit line
r, the correlation coefficient
R2, the explained variance
MSe, the mean squared error, which you can ignore
If you select ax+b (F1), the a and b meanings will be reversed.

8.2. FITTING A LINE BY LEAST SQUARES REGRESSION 447

GUIDED PRACTICE 8.22
The data set loan50, introduced in Chapter 1, contains information on randomly sampled loans offered through Lending Club. A subset of the data matrix is shown in Figure 8.18. Use a calculator to find the equation of the least squares regression line for predicting loan amount from total income based on this subset.16

     total income   loan amount
1           59000         22000
2           60000          6000
3           75000         25000
4           75000          6000
5          254000         25000
6           67000          6400
7           28800          3000

Figure 8.18: Sample of data from loan50.

16 a = 6497 and b = 0.0774; therefore, ˆy = 6497 + 0.0774x.

8.2.7 Types of outliers in linear regression
Outliers in regression are observations that fall far from the “cloud” of points. These points are especially important because they can have a strong influence on the least squares line.

EXAMPLE 8.23
There are six plots shown in Figure 8.19 along with the least squares line and residual plots. For each scatterplot and residual plot pair, identify any obvious outliers and note how they influence the least squares line. Recall that an outlier is any point that doesn’t appear to belong with the vast majority of the other points.
(1) There is one outlier far from the other points, though it only appears to slightly influence the line.
(2) There is one outlier on the right, though it is quite close to the least squares line, which suggests it wasn’t very influential.
(3) There is one point far away from the cloud, and this outlier appears to pull the least squares line up on the right; examine how the line around the primary cloud doesn’t appear to fit very well.
(4) There is a primary cloud and then a small secondary cloud of
four outliers. The secondary cloud appears to be influencing the line somewhat strongly, making the least squares line fit poorly almost everywhere. There might be an interesting explanation for the dual clouds, which is something that could be investigated.
(5) There is no obvious trend in the main cloud of points, and the outlier on the right appears to largely control the slope of the least squares line.
(6) There is one outlier far from the cloud; however, it falls quite close to the least squares line and does not appear to be very influential.

Examine the residual plots in Figure 8.19. You will probably find that there is some trend in the main clouds of (3) and (4). In these cases, the outliers influenced the slope of the least squares lines. In (5), data with no clear trend were assigned a line with a large trend simply due to one outlier (!).

LEVERAGE
Points that fall horizontally away from the center of the cloud tend to pull harder on the line, so we call them points with high leverage; these points can strongly influence the slope of the least squares line.

If one of these high leverage points does appear to actually invoke its influence on the slope of the line – as in cases (3), (4), and (5) of Example 8.23 – then we call it an influential point. Usually we can say a point is influential if, had we fitted the line without it, the influential point would have been unusually far from the least squares line.

It is tempting to remove outliers. Don’t do this without a very good reason. Models that ignore exceptional (and interesting) cases often perform poorly. For instance, if a financial firm ignored the largest market swings – the “outliers” – they would soon go bankrupt by making poorly thought-out investments.

DON’T IGNORE OUTLIERS WHEN FITTING A FINAL MODEL
If there are outliers in the data, they should not be removed or ignored without a good reason.
Whatever final model is fit to the data would not be very helpful if it ignores the most exceptional cases.

Figure 8.19: Six plots, each with a least squares line and residual plot. All data sets have at least one outlier.

8.2.8 Categorical predictors with two levels (special topic)
Categorical variables are also useful in predicting outcomes. Here we consider a categorical predictor with two levels (recall that a level is the same as a category). We’ll consider eBay auctions for a video game, Mario Kart for the Nintendo Wii, where both the total price of the auction and the condition of the game were recorded. Here we want to predict total price based on game condition, which takes values used and new. A plot of the auction data is shown in Figure 8.20.

Figure 8.20: Total auction prices for the game Mario Kart, divided into used (x = 0) and new (x = 1) condition games with the least squares regression line shown.

To incorporate the game condition variable into a regression equation, we must convert the categories into a numerical form. We will do so using an indicator variable called cond_new, which takes value 1 when the game is new and 0 when the game is used. Using this indicator variable, the linear model may be written as

price = α + β × cond_new

The fitted model is summarized in Figure 8.21, and the model with its parameter estimates is given as

price = 42.87 + 10.90 × cond_new

For categorical predictors with two levels, the linearity assumption will always be satisfied. However, we must evaluate whether the residuals in each group are approximately normal with equal variance. Based on Figure 8.20, both of these conditions are reasonably satisfied.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)      42.87         0.81     52.67     0.0000
cond_new         10.90         1.26      8.66     0.0000

Figure 8.21: Least squares regression summary for the Mario Kart data.

EXAMPLE 8.24
Interpret the two parameters estimated in the model for the price of Mario Kart in eBay auctions.
The intercept is the estimated price when cond_new takes value 0, i.e. when the game is in
used condition. That is, the average selling price of a used version of the game is $42.87. The slope indicates that, on average, new games sell for about $10.90 more than used games.

INTERPRETING MODEL ESTIMATES FOR CATEGORICAL PREDICTORS
The estimated intercept is the value of the response variable for the first category (i.e. the category corresponding to an indicator value of 0). The estimated slope is the average change in the response variable between the two categories.

Section summary
• We define the best fit line as the line that minimizes the sum of the squared residuals (errors) about the line. That is, we find the line that minimizes (y1 − ˆy1)2 + (y2 − ˆy2)2 + · · · + (yn − ˆyn)2 = Σ(yi − ˆyi)2. We call this line the least squares regression line.
• We write the least squares regression line in the form ˆy = a + bx, and we can calculate a and b based on the summary statistics as follows: b = r(sy/sx) and a = ¯y − b¯x.
• Interpreting the slope and y-intercept of a linear model
– The slope, b, describes the average increase or decrease in the y variable if the explanatory variable x is one unit larger.
– The y-intercept, a, describes the average or predicted outcome of y if x = 0. The linear model must be valid all the way to x = 0 for this to make sense, which in many applications is not the case.
• Two important considerations about the regression line
– The regression line provides estimates or predictions, not actual values. It is important to know how large s, the standard deviation of the residuals, is in order to know about how much error to expect in these predictions.
– The regression line estimates are only reasonable within the domain of the data. Predicting y for x values that are outside the domain, known as extrapolation, is unreliable and may produce ridiculous results.
• Using R2 to assess the fit of the model
– R2, called R-squared or the explained variance, is a measure of how well the model explains or fits the data. R2 is always between 0 and 1, inclusive, or between 0% and 100%, inclusive. The higher the value of R2, the better the model “fits” the data.
– The R2 for a linear model describes the proportion of variation in the y variable that is explained by the regression line.
– R2 applies to any type of model, not just a linear model, and can be used to compare the fit among various models.
– The correlation r = √R2 or r = −√R2. The value of √R2 is always positive and cannot tell us the direction of the association. If finding r based on R2, make sure to use either the scatterplot or the slope of the regression line to determine the sign of r.
• When a residual plot of the data appears as a random cloud of points, a linear model is generally appropriate. If a residual plot of the data has any type of pattern or curvature, such as a ∪-shape, a linear model is not appropriate.
• Outliers in regression are observations that fall far from the “cloud” of points.
• An influential point is a point that has a big effect or pull on the slope of the regression line. Points that are outliers in the x direction will have more pull on the slope of the regression line and are more likely to be influential points.

Exercises
8.17 Units of regression. Consider a regression predicting weight (kg) from height (cm) for a sample of adult males. What are the units of the correlation coefficient, the intercept, and the slope?
8.18 Which is higher? Determine if I or II is higher or if they are equal. Explain your reasoning. For a regression line, the uncertainty associated with the slope estimate, b, is higher when
I. there is a lot of scatter around the regression line or
II. there is very little scatter around the regression line
8.19 Over-under, Part I.
Suppose we fit a regression line to predict the shelf life of an apple based on its weight. For a particular apple, we predict the shelf life to be 4.6 days. The apple’s residual is -0.6 days. Did we overestimate or underestimate the shelf life of the apple? Explain your reasoning.
8.20 Over-under, Part II. Suppose we fit a regression line to predict the number of incidents of skin cancer per 1,000 people from the number of sunny days in a year. For a particular year, we predict the incidence of skin cancer to be 1.5 per 1,000 people, and the residual for this year is 0.5. Did we overestimate or underestimate the incidence of skin cancer? Explain your reasoning.
8.21 Tourism spending. The Association of Turkish Travel Agencies reports the number of foreign tourists visiting Turkey and tourist spending by year.17 Three plots are provided: a scatterplot showing the relationship between these two variables along with the least squares fit, a residuals plot, and a histogram of residuals.
(a) Describe the relationship between number of tourists and spending.
(b) What are the explanatory and response variables?
(c) Why might we want to fit a regression line to these data?
(d) Do the data meet the conditions required for fitting a least squares line? In addition to the scatterplot, use the residual plot and histogram to answer this question.
17 Association of Turkish Travel Agencies, Foreign Visitors Figure & Tourist Spendings By Years.

8.22 Nutrition at Starbucks, Part I. The scatterplot below shows the relationship between the number of calories and amount of carbohydrates (in grams) Starbucks food menu items contain.18 Since Starbucks only lists the number of calories on the display items, we are interested in predicting the amount of carbs a menu item has based on its calorie content.
(a) Describe the relationship between number of calories and amount of carbohydrates (in grams) that Starbucks food menu items contain. (b) In this scenario, what are the explanatory and response variables? (c) Why might we want to fit a regression
line to these data? (d) Do these data meet the conditions required for fitting a least squares line?
8.23 The Coast Starlight, Part II. Exercise 8.11 introduces data on the Coast Starlight Amtrak train that runs from Seattle to Los Angeles. The mean travel time from one stop to the next on the Coast Starlight is 129 mins, with a standard deviation of 113 minutes. The mean distance traveled from one stop to the next is 108 miles with a standard deviation of 99 miles. The correlation between travel time and distance is 0.636.
(a) Write the equation of the regression line for predicting travel time.
(b) Interpret the slope and the intercept in this context.
(c) Calculate R2 of the regression line for predicting travel time from distance traveled for the Coast Starlight, and interpret R2 in the context of the application.
(d) The distance between Santa Barbara and Los Angeles is 103 miles. Use the model to estimate the time it takes for the Starlight to travel between these two cities.
(e) It actually takes the Coast Starlight about 168 mins to travel from Santa Barbara to Los Angeles. Calculate the residual and explain the meaning of this residual value.
(f) Suppose Amtrak is considering adding a stop to the Coast Starlight 500 miles away from Los Angeles. Would it be appropriate to use this linear model to predict the travel time from Los Angeles to this point?
8.24 Body measurements, Part III. Exercise 8.13 introduces data on shoulder girth and height of a group of individuals. The mean shoulder girth is 107.20 cm with a standard deviation of 10.37 cm. The mean height is 171.14 cm with a standard deviation of 9.41 cm. The correlation between height and shoulder girth is 0.67.
(a) Write the equation of the regression line for predicting height.
(b) Interpret the slope and the intercept in this context.
(c) Calculate R2 of the regression line for predicting height from shoulder girth, and interpret it in the context of the application.
(d) A randomly selected student from your class has a shoulder girth of 100 cm. Predict the height of this student using the model.
(e) The student from part (d) is 160 cm tall. Calculate the residual, and explain what this residual means.
(f) A one year old has a shoulder girth of 56 cm. Would it be appropriate to use this linear model to predict the height of this child?
18 Source: Starbucks.com, collected on March 10, 2011, www.starbucks.com/menu/nutrition.

8.25 Murders and poverty, Part I. The following regression output is for predicting annual murders per million from percentage living in poverty in a random sample of 20 metropolitan areas.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    -29.901        7.789    -3.839      0.001
poverty%         2.559        0.390     6.562      0.000

s = 5.512, R2 = 70.52%, R2_adj = 68.89%

(a) Write out the linear model. (b) Interpret the intercept. (c) Interpret the slope. (d) Interpret R2. (e) Calculate the correlation coefficient.

8.26 Cats, Part I. The following regression output is for predicting the heart weight (in g) of cats from their body weight (in kg). The coefficients are estimated using a dataset of 144 domestic cats.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)     -0.357        0.692    -0.515      0.607
body wt          4.034        0.250    16.119      0.000

s = 1.452, R2 = 64.66%, R2_adj = 64.41%

(a) Write out the linear model. (b) Interpret the intercept. (c) Interpret the slope. (d) Interpret R2. (e) Calculate the correlation coefficient.

8.27 Outliers, Part I. Identify the outliers in the scatterplots shown below, and determine what type of outliers they are. Explain your reasoning.
8.28 Outliers, Part II. Identify the outliers in the scatterplots shown below and determine what type of outliers they are. Explain your reasoning.
8.29 Urban homeowners, Part I. The scatterplot below shows the percent of families who own their home vs. the percent of the population living in urban areas.19 There are 52 observations, each corresponding to a US state, plus Puerto Rico and the District of Columbia.
(a) Describe the relationship between the percent of families who own their home and the percent of the population living in urban areas.
(b) The outlier at the bottom right corner is the District of Columbia, where 100% of the population is considered urban. What type of an outlier is this observation?
8.30 Crawling babies, Part II. Exercise 8.12 introduces data on the average monthly temperature during the month babies first try to crawl (about 6 months after birth) and the average first crawling age for babies born in a given month. A scatterplot of these two variables reveals a potential outlying month when the average temperature is about 53°F and average crawling age is about 28.5 weeks. Does this point have high leverage? Is it an influential point?
19 United States Census Bureau, 2010 Census Urban and Rural Classification and Urban Area Criteria and Housing Characteristics: 2010.

8.3 Transformations for skewed data
County population size among the counties in the US is very strongly right skewed. Can we apply a transformation to make the distribution more symmetric? How would such a transformation affect the scatterplot and residual plot when another variable is graphed against this variable? In this section, we will see the power of transformations for very skewed data.

Learning objectives
1. See how a log transformation can bring symmetry to an extremely skewed variable.
2.
Recognize that data can often be transformed to produce a linear relationship, and that this transformation often involves log of the y-values and sometimes log of the x-values.
3. Use residual plots to assess whether a linear model for transformed data is reasonable.

8.3.1 Introduction to transformations

EXAMPLE 8.25
Consider the histogram of county populations shown in Figure 8.22(a), which shows extreme skew. What isn’t useful about this plot?
Nearly all of the data fall into the left-most bin, and the extreme skew obscures many of the potentially interesting details in the data.

There are some standard transformations that may be useful for strongly right skewed data where much of the data is positive but clustered near zero. A transformation is a rescaling of the data using a function. For instance, a plot of the logarithm (base 10) of county populations results in the new histogram in Figure 8.22(b). The transformed data are symmetric, and any potential outliers appear much less extreme than in the original data set. By reining in the outliers and extreme skew, transformations like this often make it easier to build statistical models against the data.

Transformations can also be applied to one or both variables in a scatterplot. A scatterplot of the population change from 2010 to 2017 against the population in 2010 is shown in Figure 8.23(a). In this first scatterplot, it’s hard to decipher any interesting patterns because the population variable is so strongly skewed. However, if we apply a log10 transformation to the population variable, as shown in Figure 8.23(b), a positive association between the variables is revealed. While fitting a line to predict population change (2010 to 2017) from population (in 2010) does not seem reasonable, fitting a line to predict population change from log10(population in 2010) does seem reasonable.

Transformations other than the logarithm can be useful, too. For instance, the square root (√original observation) and inverse (1/original observation) are commonly used by data scientists. Common goals in transforming data are to see the data structure differently, reduce skew, assist in modeling, or straighten a nonlinear relationship in a scatterplot.
Figure 8.22: (a) A histogram of the populations of all US counties. (b) A histogram of log10-transformed county populations. For this plot, the x-value corresponds to the power of 10, e.g. “4” on the x-axis corresponds to 10^4 = 10,000.

Figure 8.23: (a) Scatterplot of population change against the population before the change. (b) A scatterplot of the same data but where the population size has been log-transformed.

8.3.2 Transformations to achieve linearity

Figure 8.24: Variable y is plotted against x. A nonlinear relationship is evident by the ∪-pattern shown in the residual plot. The curvature is also visible in the original plot.

EXAMPLE 8.26
Consider the scatterplot and residual plot in Figure 8.24. The regression output is also provided. Is the linear model ˆy = −52.3564 + 2.7842x a good model for the data?

The regression equation is y = -52.3564 + 2.7842 x

Predictor    Coef       SE Coef   T        P
Constant     -52.3564   7.2757    -7.196   3e-08
x              2.7842   0.1768    15.752   < 2e-16

S = 13.76   R-Sq = 88.26%   R-Sq(adj) = 87.91%

We can note the R2 value is fairly large. However, this alone does not mean that the model is good. Another model might be much better. When assessing the appropriateness of a linear model, we should look at the residual plot. The ∪-pattern in the residual plot tells us the original data is curved. If we inspect the two plots, we can see that for small and large values of x we systematically underestimate y, whereas for middle values of x, we systematically overestimate y. The curved trend can also be seen in the original scatterplot. Because of this, the linear model is not appropriate, and it would not be appropriate to perform a t-test for the slope because the conditions for inference are not met. However, we might be able to
use a transformation to linearize the data.

Regression analysis is easier to perform on linear data. When data are nonlinear, we sometimes transform the data in a way that makes the resulting relationship linear. The most common transformation is log of the y values. Sometimes we also apply a transformation to the x values. We generally use the residuals as a way to evaluate whether the transformed data are more linear. If so, we can say that a better model has been found.

EXAMPLE 8.27
Using the regression output for the transformed data, write the new linear regression equation.

The regression equation is log(y) = 1.722540 + 0.052985 x

Predictor    Coef       SE Coef    T       P
Constant     1.722540   0.056731   30.36   < 2e-16
x            0.052985   0.001378   38.45   < 2e-16

S = 0.1073   R-Sq = 97.82%   R-Sq(adj) = 97.75%

The linear regression equation can be written as: log(y) = 1.723 + 0.053x

Figure 8.25: A plot of log(y) against x. The residuals don’t show any evident patterns, which suggests the transformed data are well fit by a linear model.

GUIDED PRACTICE 8.28
Which of the following statements are true? There may be more than one.20
(a) There is an apparent linear relationship between x and y.
(b) There is an apparent linear relationship between x and log(y).
(c) The model provided by Regression I (ˆy = −52.3564 + 2.7842x) yields a better fit.
(d) The model provided by Regression II (log(y) = 1.723 + 0.053x) yields a better fit.
20 Part (a) is false since there is a nonlinear (curved) trend in the data. Part (b) is true. Since the transformed data show a stronger linear trend, the transformed model is a better fit, i.e. Part (c) is false, and Part (d) is true.

Section summary
• A transformation is a rescaling of the data using a function. When data are very skewed, a log transformation often results in more symmetric data.
• Regression analysis is easier to perform on linear data. When data are nonlinear, we sometimes transform the data in a way that results in a linear relationship. The most common transformation is log of the y-values. Sometimes we also apply a transformation to the x-values.
• To assess the model, we look at the residual plot of the transformed data. If the residual plot of the original data has a pattern, but the residual plot of the transformed data has no pattern, a linear model for the transformed data is reasonable, and the transformed model provides a better fit than the simple linear model.

Exercises
8.31 Used trucks. The scatterplot below shows the relationship between year and price (in thousands of $) of a random sample of 42 pickup trucks. Also shown is a residuals plot for the linear model for predicting price from year.
(a) Describe the relationship between these two variables and comment on whether a linear model is appropriate for modeling the relationship between year and price.
(b) The scatterplot below shows the relationship between logged (natural log) price and year of these trucks, as well as the residuals plot for modeling these data. Comment on which model (linear model from earlier or logged model presented here) is a better fit for these data.
(c) The output for the logged model is given below. Interpret the slope in context of the data.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   -271.981       25.042   -10.861      0.000
Year             0.137        0.013    10.937      0.000

8.32 Income and hours worked. The scatterplot below shows the relationship between income and hours worked for a random sample of 787 Americans. Also shown is a residuals plot for the linear model for predicting income from hours worked. The data come from the 2012 American Community Survey.21
(a) Describe the relationship between these two variables and comment on whether a linear model is appropriate for modeling the relationship between hours worked and income.
(b) The scatterplot below shows the relationship between logged (natural log) income and hours worked, as well as the residuals plot for modeling these data. Comment on which model (linear model from earlier or logged model presented here) is a better fit for these data.
(c) The output for the logged model is given below. Interpret the slope in context of the data.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)      1.017        0.113     9.000      0.000
hrs_work         0.058        0.003    21.086      0.000

21 United States Census Bureau. Summary File. 2012 American Community Survey. U.S. Census Bureau’s American Community Survey Office, 2013. Web.

8.4 Inference for the slope of a regression line
Here we encounter our last confidence interval and hypothesis test procedures, this time for making inferences about the slope of the population regression line. We can use this to answer questions such as the following:
• Is the unemployment rate a significant linear predictor for the loss of the President’s party in the House of Representatives?
• On average, how much less in college gift aid do students receive when their parents earn an additional $1000 in income?

Learning objectives
1. Recognize that the slope of the sample regression line is a point estimate and has an associated standard error.
2.
Be able to read the results of computer regression output and identify the quantities needed for inference for the slope of the regression line, specifically the slope of the sample regression line, the SE of the slope,
and the degrees of freedom.
3. State and verify whether or not the conditions are met for inference on the slope of the regression line using the t-distribution.
4. Carry out a complete confidence interval procedure for the slope of the regression line.
5. Carry out a complete hypothesis test for the slope of the regression line.
6. Distinguish between when to use the t-test for the slope of a regression line and when to use the 1-sample t-test for a mean of differences.

8.4.1 The role of inference for regression parameters
Previously, we found the equation of the regression line for predicting gift aid from family income at Elmhurst College. The slope, b, was equal to −0.0431. This is the slope for our sample data. However, the sample was taken from a larger population. We would like to use the slope computed from our sample data to estimate the slope of the population regression line.

The equation for the population regression line can be written as

µy = α + βx

Here, α and β represent two model parameters, namely the y-intercept and the slope of the true or population regression line. (This use of α and β has nothing to do with the α and β we used previously to represent the probability of a Type I Error and Type II Error!) The parameters α and β are estimated using data. We can look at the equation of the regression line calculated from a particular data set:

ˆy = a + bx

and see that a and b are point estimates for α and β, respectively. If we plug in the values of a and b, the regression equation for predicting gift aid based on family income is:

ˆy = 24.3193 − 0.0431x

The slope of the sample regression line, −0.0431, is our best estimate for the slope of the population regression line, but there is variability in this estimate since it is based on a sample. A different sample would produce a somewhat different estimate of the slope.
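This sampling variability can be seen directly in a small simulation. The sketch below is illustrative only: the population line (true intercept 5, true slope 2), the noise level, and the sample size are hypothetical choices, not values from the Elmhurst data. Each simulated sample yields a somewhat different least squares slope, just as different real samples would.

```python
import random

def fit_slope(x, y):
    """Least squares slope: b = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

# Sanity check: the points (0,1), (1,3), (2,5), (3,7) lie exactly on y = 1 + 2x.
assert fit_slope([0, 1, 2, 3], [1, 3, 5, 7]) == 2.0

# Draw several samples from a hypothetical population with true slope beta = 2;
# each sample gives a somewhat different slope estimate b.
random.seed(1)
slopes = []
for _ in range(5):
    x = [random.uniform(0, 10) for _ in range(30)]
    y = [5 + 2 * xi + random.gauss(0, 3) for xi in x]
    slopes.append(fit_slope(x, y))

print([round(b, 3) for b in slopes])  # five different estimates clustered near 2
```

Running this shows five distinct slope estimates scattered around the true slope of 2, which is exactly the variability the standard error of the slope quantifies.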
The standard error of the slope tells us the typical variation in the slope of the sample regression line and the typical error in using this slope to estimate the slope of the population regression line. We would like to construct a 95% confidence interval for β, the slope of the
population regression line. As with means, inference for the slope of a regression line is based on the t-distribution.

INFERENCE FOR THE SLOPE OF A REGRESSION LINE
Inference for the slope of a regression line is based on the t-distribution with n − 2 degrees of freedom, where n is the number of paired observations. Once we verify that conditions for using the t-distribution are met, we will be able to construct the confidence interval for the slope using a critical value t based on n − 2 degrees of freedom. We will use a table of the regression summary to find the point estimate and standard error for the slope.

8.4.2 Conditions for the least squares line
Conditions for inference in the context of regression can be more complicated than when dealing with means or proportions. Inference for parameters of a regression line involves the following assumptions:
Linearity. The true relationship between the two variables follows a linear trend. We check whether this is reasonable by examining whether the data follows a linear trend. If there is a nonlinear trend (e.g. left panel of Figure 8.26), an advanced regression method from another book or later course should be applied.
Nearly normal residuals. For each x-value, the residuals should be nearly normal. When this assumption is found to be unreasonable, it is usually because of outliers or concerns about influential points. An example which suggests non-normal residuals is shown in the second panel of Figure 8.26. If the sample size n ≥ 30, then this assumption is not necessary.
Constant variability. The variability of points around the true least squares line is constant for all values of x. An example of non-constant variability is shown in the third panel of Figure 8.26.
Independent. The observations are independent of one another. The observations can be considered independent when they are collected from a random sample or randomized experiment.
Be careful of data collected sequentially in what is called a time series. An example of data collected in such a fashion is shown in the fourth panel of Figure 8.26. We see in Figure 8.26 that patterns in the residual plots suggest that the assumptions for regression inference are not met in those four examples. In fact, nonlinear trends in the data, outliers, and non-constant variability in the residuals are often easier to detect in a residual plot than in a scatterplot. We note that the
second assumption regarding nearly normal residuals is particularly difficult to assess when the sample size is small. We can make a graph, such as a histogram, of the residuals, but we cannot expect a small data set to be nearly normal. All we can do is to look for excessive skew or outliers. Outliers and influential points in the data can be seen from the residual plot as well as from a histogram of the residuals. CONDITIONS FOR INFERENCE ON THE SLOPE OF A REGRESSION LINE 1. The data is collected from a random sample or randomized experiment. 2. The residual plot appears as a random cloud of points and does not have any patterns or significant outliers that would suggest that the linearity, nearly normal residuals, constant variability, or independence assumptions are unreasonable. 8.4. INFERENCE FOR THE SLOPE OF A REGRESSION LINE 465 Figure 8.26: Four examples showing when the inference methods in this chapter are insufficient to apply to the data. In the left panel, a straight line does not fit the data. In the second panel, there are outliers; two points on the left are relatively distant from the rest of the data, and one of these points is very far away from the line. In the third panel, the variability of the data around the line increases with larger values of x. In the last panel, a time series data set is shown, where successive observations are highly correlated. Figure 8.27: Left: Scatterplot of gift aid versus family income for 50 freshmen at Elmhurst college. Right: Residual plot for the model shown in left panel. 8.4.3 Constructing a confidence interval for the slope of a regression line We would like to construct a confidence interval for the slope of the regression line for predicting gift aid based on family income for all freshmen at Elmhurst college. Do conditions seem to be satisfied? We recall that the 50 freshmen in the sample were randomly chosen, so the observations are independent. 
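The residual plot used in these checks is built from the residuals y − ŷ. As a minimal sketch (with made-up data standing in for the Elmhurst sample), the residuals of a least squares fit can be computed directly with the standard formulas:

```python
# Hypothetical (x, y) data; the fitting formulas are the usual least squares ones.
data = [(60, 21), (75, 20), (90, 18), (110, 19), (150, 17), (200, 15)]
xs, ys = zip(*data)
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n

# slope and intercept of the least squares line
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar  # the least squares line always passes through (xbar, ybar)

# residual = observed y minus fitted y
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print([round(r, 3) for r in residuals])
```

Note that a least squares fit forces the residuals to sum to zero, so a residual plot is centered on zero by construction; what we inspect is whether the scatter around zero is patternless and of constant spread.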
Next, we need to look carefully at the scatterplot and the residual plot. ALWAYS CHECK CONDITIONS Do not blindly apply formulas or rely on regression output; always first look at a scatterplot or a residual plot. If conditions for fitting the regression line are not
met, the methods presented here should not be applied. The scatterplot seems to show a linear trend, which matches the fact that there is no curved trend apparent in the residual plot. Also, the standard deviation of the residuals is mostly constant for different x values and there are no outliers or influential points. There are no patterns in the residual plot that would suggest that a linear model is not appropriate, so the conditions are reasonably met. We are now ready to calculate the 95% confidence interval.

(Intercept): Estimate 24.3193, Std. Error 1.2915, t value 18.83, Pr(>|t|) 0.0000
family income: Estimate -0.0431, Std. Error 0.0108, t value -3.98, Pr(>|t|) 0.0002
Figure 8.28: Summary of least squares fit for the Elmhurst College data, where we are predicting gift aid by the university based on the family income of students.

EXAMPLE 8.29
Construct a 95% confidence interval for the slope of the regression line for predicting gift aid from family income at Elmhurst college. As usual, the confidence interval will take the form: point estimate ± critical value × SE of estimate. The point estimate for the slope of the population regression line is the slope of the sample regression line: −0.0431. The standard error of the slope can be read from the table as 0.0108. Note that we do not need to divide 0.0108 by the square root of n or do any further calculations on 0.0108; 0.0108 is the SE of the slope. Note that the value of t given in the table refers to the test statistic, not to the critical value t. To find t we can use a t-table. Here n = 50, so df = 50 − 2 = 48. Using a t-table, we round down to row df = 40 and we estimate the critical value t = 2.021 for
a 95% confidence level. The confidence interval is calculated as: −0.0431 ± 2.021 × 0.0108 = (−0.065, −0.021). Note: t using exactly 48 degrees of freedom is equal to 2.01 and gives the same interval of (−0.065, −0.021).

EXAMPLE 8.30
Interpret the confidence interval in context. What can we conclude? We are 95% confident that the slope of the population regression line, the true average change in gift aid for each additional $1000 in family income, is between −$0.065 thousand dollars and −$0.021 thousand dollars. That is, we are 95% confident that, on average, when family income is $1000 higher, gift aid is between $21 and $65 lower. Because the entire interval is negative, we have evidence that the slope of the population regression line is less than 0. In other words, we have evidence that there is a significant negative linear relationship between gift aid and family income.

CONSTRUCTING A CONFIDENCE INTERVAL FOR THE SLOPE OF REGRESSION LINE
To carry out a complete confidence interval procedure to estimate the slope of the population regression line β,
Identify: Identify the parameter and the confidence level, C%. The parameter will be a slope of the population regression line, e.g. the slope of the population regression line relating air quality index to average rainfall per year for each city in the United States.
Choose: Choose the correct interval procedure and identify it by name. To estimate the slope of a regression model we use a t-interval for the slope.
Check: Check conditions for using a t-interval for the slope. 1. Independence: Data should come from a random sample or randomized experiment. If sampling without replacement, check that the sample size is less than 10% of the population size. 2. Linearity: Check that the scatterplot does not show a curved trend and that the residual plot shows no ∪-shape pattern. 3.
Constant variability: Use the residual plot to check that the standard deviation of the residuals is constant across all x-values. 4. Normality: The population of residuals is nearly normal or the sample size is ≥ 30. If the sample size is less than 30 check for strong skew or outliers in the sample residuals. If neither is found, then the condition that the population of residuals is nearly normal is considered reasonable.
Calculate: Calculate the confidence interval and record it in interval form.
point estimate ± t × SE of estimate, df = n − 2
point estimate: the slope b of the sample regression line
SE of estimate: SE of slope (find using computer output)
t: use a t-distribution with df = n − 2 and confidence level C%
( , )
Conclude: Interpret the interval and, if applicable, draw a conclusion in context. We are C% confident that the true slope of the regression line, the average change in [y] for each unit increase in [x], is between ___ and ___. If applicable, draw a conclusion based on whether the interval is entirely above, is entirely below, or contains the value 0.

Figure 8.29: Left: Scatterplot of head length versus total length for 104 brushtail possums. Right: Residual plot for the model shown in left panel.

EXAMPLE 8.31
The regression summary below shows statistical software output from fitting the least squares regression line for predicting head length from total length for 104 brushtail possums. The scatterplot and residual plot are shown above.
Constant: Coef 42.70979, SE Coef 5.17281, T 8.257, P 5.66e-13
total length: Coef 0.57290, SE Coef 0.05933, T 9.657, P 4.68e-16
S = 2.595, R-Sq = 47.76%, R-Sq(adj) = 47.25%
Construct a 95% confidence interval for the slope of the regression line. Is there convincing evidence that there is a positive, linear relationship between head length and total length?

Identify: The parameter of interest is the slope of the population regression line for predicting head length from body length. We want to estimate this at the 95% confidence level.
Choose: Because the parameter to be estimated is the slope of a regression line, we will use the t-interval for the slope. Check: These
data come from a random sample. The residual plot shows no pattern so a linear model seems reasonable. The residual plot also shows that the residuals have constant standard deviation. Finally, n = 104 ≥ 30 so we do not have to check for skew in the residuals. All four conditions are met.
Calculate: We will calculate the interval: point estimate ± t × SE of estimate. We read the slope of the sample regression line and the corresponding SE from the table. The point estimate is b = 0.57290. The SE of the slope is 0.05933, which can be found next to the slope of 0.57290. The degrees of freedom is df = n − 2 = 104 − 2 = 102. As before, we find the critical value t using a t-table (the t value is not the same as the T-statistic for the hypothesis test). Using the t-table at row df = 100 (round down since 102 is not on the table) and confidence level 95%, we get t = 1.984. So the 95% confidence interval is given by: 0.57290 ± 1.984 × 0.05933 = (0.456, 0.691).
Conclude: We are 95% confident that the slope of the population regression line is between 0.456 and 0.691. That is, we are 95% confident that the true average increase in head length for each additional cm in total length is between 0.456 mm and 0.691 mm. Because the interval is entirely above 0, we do have evidence of a positive linear association between the head length and body length for brushtail possums.

8.4.4 Midterm elections and unemployment
Elections for members of the United States House of Representatives occur every two years, coinciding every four years with the U.S. Presidential election. The set of House elections occurring during the middle of a Presidential term are called midterm elections. In America’s two-party system, one political theory suggests the higher the unemployment rate, the worse the President’s party will do in the midterm elections. To
assess the validity of this claim, we can compile historical data and look for a connection. We consider every midterm election from 1898 to 2018, with the exception of those elections during the Great Depression. Figure 8.30 shows these data and the least-squares regression line: % change in House seats for President’s party = −7.36 − 0.89 × (unemployment rate). We consider the percent change in the number of seats of the President’s party (e.g. percent change in the number of seats for Republicans in 2018) against the unemployment rate. Examining the data, there are no clear deviations from linearity, the constant variance condition, or the normality of residuals. While the data are collected sequentially, a separate analysis was used to check for any apparent correlation between successive observations; no such correlation was found.

Figure 8.30: The percent change in House seats for the President’s party in each election from 1898 to 2018 plotted against the unemployment rate. The two points for the Great Depression have been removed, and a least squares regression line has been fit to the data. Explore this data set on Tableau Public.

GUIDED PRACTICE 8.32
The data for the Great Depression (1934 and 1938) were removed because the unemployment rate was 21% and 18%, respectively. Do you agree that they should be removed for this investigation? Why or why not?22 There is a negative slope in the line shown in Figure 8.30. However, this slope (and the y-intercept) are only estimates of the parameter values. We might wonder, is this convincing evidence that the “true” linear model has a negative slope? That is, do the data provide strong evidence that the political theory is accurate?
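Once fitted, a regression line like the one above is simply a prediction rule. A minimal sketch, using only the coefficients reported in the text (−7.36 and −0.89); the 9% input is an arbitrary illustrative value, not from the data set:

```python
def predicted_seat_change(unemployment_rate):
    """Predicted % change in House seats for the President's party,
    using the least squares line reported in the text."""
    return -7.36 - 0.89 * unemployment_rate

# At 9% unemployment the line predicts roughly a 15.4% loss of seats.
print(round(predicted_seat_change(9.0), 2))
```

Higher unemployment inputs give more negative predictions, which is exactly the pattern the hypothesis test below asks whether we can trust.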
We can frame this investigation as a statistical hypothesis test: H0: β = 0. The true linear model has slope zero. HA: β < 0. The true linear model has a slope less than zero. The higher the unemployment, the greater the loss for the President’s party in the House of Representatives. We would
reject H0 in favor of HA if the data provide strong evidence that the slope of the population regression line is less than zero. To assess the hypotheses, we identify a standard error for the estimate, compute an appropriate test statistic, and identify the p-value. Before we calculate these quantities, how good are we at visually determining from a scatterplot when a slope is significantly less than or greater than 0? And why do we tend to use a 0.05 significance level as our cutoff? Try out the following activity which will help answer these questions. TESTING FOR THE SLOPE USING A CUTOFF OF 0.05 What does it mean to say that the slope of the population regression line is significantly greater than 0? And why do we tend to use a cutoff of α = 0.05? This 5-minute interactive task will explain: www.openintro.org/why05 22We will provide two considerations. Each of these points would have very high leverage on any least-squares regression line, and years with such high unemployment may not help us understand what would happen in other years where the unemployment is only modestly high. On the other hand, these are exceptional cases, and we would be discarding important information if we exclude them from a final analysis. 8.4. INFERENCE FOR THE SLOPE OF A REGRESSION LINE 471 8.4.5 Understanding regression output from software The residual plot shown in Figure 8.31 shows no pattern that would indicate that a linear model is inappropriate. Therefore we can carry out a test on the population slope using the sample slope as our point estimate. Just as for other point estimates we have seen before, we can compute a standard error and test statistic for b. The test statistic T follows a t-distribution with n − 2 degrees of freedom. Figure 8.31: The residual plot shows no pattern that would indicate that a linear model is inappropriate. Explore this data set on Tableau Public. 
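Using the estimate b = −0.8897 and SE = 0.8350 that appear in the regression output below, the test statistic and one-sided p-value can be computed directly. This is a self-contained sketch: the trapezoid integration of the t density is an illustrative stand-in for a statistics library's CDF function, not how software actually does it.

```python
import math

def t_cdf(x, df, steps=20000):
    """P(T <= x) for a t-distribution with df degrees of freedom,
    via numerical (trapezoid) integration of the t density."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    lo = -50.0  # effectively -infinity for these tail areas
    h = (x - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        u = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * c * (1 + u * u / df) ** (-(df + 1) / 2)
    return total * h

# Values from the regression output: b = -0.8897, SE = 0.8350, n = 27
t_stat = (-0.8897 - 0) / 0.8350
p_one_sided = t_cdf(t_stat, df=25)  # lower-tail area, since HA: beta < 0
print(round(t_stat, 2), round(p_one_sided, 3))
```

The results match the worked example that follows: T ≈ −1.07 and a one-sided p-value of about 0.148.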
HYPOTHESIS TESTS ON THE SLOPE OF THE REGRESSION LINE Use a t-test with n − 2 degrees of freedom when performing a hypothesis test on the slope of a regression line. We will rely on statistical software to compute the standard error and leave the explanation of how this standard error is determined to a second or third statistics course. Figure 8.32 shows software
output for the least squares regression line in Figure 8.30. The row labeled unemp represents the information for the slope, which is the coefficient of the unemployment variable.

(Intercept): Estimate -7.3644, Std. Error 5.1553, t value -1.43, Pr(>|t|) 0.1646
unemp: Estimate -0.8897, Std. Error 0.8350, t value -1.07, Pr(>|t|) 0.2961
Figure 8.32: Least squares regression summary for the percent change in seats of president’s party in House of Representatives based on percent unemployment.

Figure 8.33: The distribution shown here is the sampling distribution for b, if the null hypothesis was true. The shaded tail represents the p-value for the hypothesis test evaluating whether there is convincing evidence that higher unemployment corresponds to a greater loss of House seats for the President’s party during a midterm election.

EXAMPLE 8.33
What does the first column of numbers in the regression summary represent? The entries in the first column represent the least squares estimates for the y-intercept and slope, a and b respectively. Using this information, we could write the equation for the least squares regression line as ŷ = −7.3644 − 0.8897x where y in this case represents the percent change in the number of seats for the president’s party, and x represents the unemployment rate. We previously used a test statistic T for hypothesis testing in the context of means. Regression is very similar. Here, the point estimate is b = −0.8897. The SE of the estimate is 0.8350, which is given in the second column, next to the estimate of b. This SE represents the typical error when using the slope of the sample regression line to estimate the slope of the population regression line. The null value for the slope is 0, so we now have everything we need to compute the test statistic.
We have: T = (point estimate − null value) / (SE of estimate) = (−0.8897 − 0) / 0.8350 = −1.07. This value corresponds to the T-score reported in the regression output in the third column along the unemp row.

EXAMPLE 8.34
In this example, the sample size n = 27. Identify the degrees of freedom and p-value for the hypothesis test. The degrees of freedom for this test is n − 2, or df = 27 − 2 = 25. We could use a table or a calculator to find the probability of a value less than -1.07 under the t-distribution with 25 degrees of freedom. However, the two-sided p-value is given in Figure 8.32, next to the corresponding t-statistic. Because we have a one-sided alternative hypothesis, we take half of this. The p-value for the test is 0.2961/2 = 0.148.

Because the p-value is so large, we do not reject the null hypothesis. That is, the data do not provide convincing evidence that a higher unemployment rate is associated with a larger loss for the President’s party in the House of Representatives in midterm elections.

DON’T CARELESSLY USE THE P-VALUE FROM REGRESSION OUTPUT
The last column in regression output often lists p-values for one particular hypothesis: a two-sided test where the null value is zero. If your test is one-sided and the point estimate is in the direction of HA, then you can halve the software’s p-value to get the one-tail area. If neither of these scenarios matches your hypothesis test, be cautious about using the software output to obtain the p-value.

HYPOTHESIS TEST FOR THE SLOPE OF REGRESSION LINE
To carry out a complete hypothesis test for the claim that there is no linear relationship between two numerical variables, i.e. that β = 0,
Identify: Identify the hypotheses and the significance level, α. H0: β = 0; HA: β ≠ 0, HA: β > 0, or HA: β < 0
Choose: Choose the correct test procedure and identify it by name. To test hypotheses about the slope of a regression model we use a t-test for the slope.
Check: Check conditions for using a t-test for the slope. 1. Independence: Data should come from a random sample or randomized experiment. If sampling without replacement
, check that the sample size is less than 10% of the population size. 2. Linearity: Check that the scatterplot does not show a curved trend and that the residual plot shows no ∪-shape pattern. 3. Constant variability: Use the residual plot to check that the standard deviation of the residuals is constant across all x-values. 4. Normality: The population of residuals is nearly normal or the sample size is ≥ 30. If the sample size is less than 30 check for strong skew or outliers in the sample residuals. If neither is found, then the condition that the population of residuals is nearly normal is considered reasonable.
Calculate: Calculate the t-statistic, df, and p-value.
T = (point estimate − null value) / (SE of estimate), df = n − 2
point estimate: the slope b of the sample regression line
SE of estimate: SE of slope (find using computer output)
null value: 0
p-value = (based on the t-statistic, the df, and the direction of HA)
Conclude: Compare the p-value to α, and draw a conclusion in context. If the p-value is < α, reject H0; there is sufficient evidence that [HA in context]. If the p-value is > α, do not reject H0; there is not sufficient evidence that [HA in context].

EXAMPLE 8.35
The regression summary below shows statistical software output from fitting the least squares regression line for predicting gift aid based on family income for 50 randomly selected freshman students at Elmhurst College. The scatterplot and residual plot were shown in Figure 8.27.
Constant: Coef 24.31933, SE Coef 1.29145, T 18.831, P < 2e-16
family income: Coef -0.04307, SE Coef 0.01081, T -3.985, P 0.000229
S = 4.783, R-Sq = 24.86%, R-Sq(adj) = 23.29%
Do these data provide convincing evidence that there is a negative, linear relationship between family income and gift aid? Carry out a complete hypothesis test at the 0.05 significance level. Use the five step framework to organize your work.

Identify: We will test the following hypotheses at the α =
0.05 significance level. H0: β = 0. There is no linear relationship. HA: β < 0. There is a negative linear relationship. Here, β is the slope of the population regression line for predicting gift aid from family income at Elmhurst College.
Choose: Because the hypotheses are about the slope of a regression line, we choose the t-test for a slope.
Check: The data come from a random sample of less than 10% of the total population of freshman students at Elmhurst College. The lack of any pattern in the residual plot indicates that a linear model is reasonable. Also, the residual plot shows that the residuals have constant variance. Finally, n = 50 ≥ 30 so we do not have to worry too much about any skew in the residuals. All four conditions are met.
Calculate: We will calculate the t-statistic, degrees of freedom, and the p-value. T = (point estimate − null value) / (SE of estimate). We read the slope of the sample regression line and the corresponding SE from the table. The point estimate is: b = −0.04307. The SE of the slope is: SE = 0.01081. T = (−0.04307 − 0) / 0.01081 = −3.985. Because HA uses a less than sign (<), meaning that it is a lower-tail test, the p-value is the area to the left of t = −3.985 under the t-distribution with 50 − 2 = 48 degrees of freedom. The p-value = (1/2)(0.000229) ≈ 0.0001.
Conclude: The p-value of 0.0001 is < 0.05, so we reject H0; there is sufficient evidence that there is a negative linear relationship between family income and gift aid at Elmhurst College.

GUIDED PRACTICE 8.36
In context, interpret the p-value from the previous example.23
23Assuming that the probability model is true and assuming that the null hypothesis is true, i.e. there really is no linear relationship between family income and gift aid at Elmhurst College, there is only a 0.0001 chance of getting a test statistic this small or smaller (HA uses a <, so the p-value represents the area in the left tail). Because this value is so small, we reject the null hypothesis.
8.4.6 Technology: the t-test/interval for the slope
We generally rely on regression output from statistical software programs to provide us with the necessary quantities: b and SE of b. However, we can also find the test statistic, p-value, and confidence interval using Desmos or a handheld calculator. Get started quickly with this Desmos T-Test/Interval Calculator (available at openintro.org/ahss/desmos). For instructions on implementing the T-Test/Interval on the TI or Casio, see the Graphing Calculator Guides at openintro.org/ahss.

8.4.7 Which inference procedure to use for paired data?
In Section 7.2.4, we looked at a set of paired data involving the price of textbooks for UCLA courses at the UCLA Bookstore and on Amazon. The left panel of Figure 8.34 shows the difference in price (UCLA Bookstore − Amazon) for each book. Because we have two data points on each textbook, it also makes sense to construct a scatterplot, as seen in the right panel of Figure 8.34.

Figure 8.34: Left: histogram of the difference (UCLA Bookstore price - Amazon price) for each book sampled. Right: scatterplot of Amazon Price versus UCLA Bookstore price.

EXAMPLE 8.37
What additional information does the scatterplot provide about the price of textbooks at UCLA Bookstore and on Amazon? With a scatterplot, we see the relationship between the variables. We can see that when UCLA Bookstore price is larger, Amazon price also tends to be larger. We can consider the strength of the correlation and we can draw the linear regression equation for predicting Amazon price from UCLA Bookstore price.

EXAMPLE 8.38
Which test should we do if we want to check whether: 1. prices for textbooks for UCLA courses are higher at the UCLA Bookstore than on Amazon 2.
there is a significant, positive linear relationship between UCLA Bookstore price and Amazon price? In
the first case, we are interested in whether the differences (UCLA Bookstore − Amazon) for all UCLA textbooks are, on average, greater than 0, so we would do a 1-sample t-test for a mean of differences. In the second case, we are interested in whether the slope of the regression line for predicting Amazon price from UCLA Bookstore price is significantly greater than 0, so we would do a t-test for the slope of a regression line. Likewise, a 1-sample t-interval for a mean of differences would provide an interval of reasonable values for the mean of differences in textbook price between UCLA Bookstore and Amazon (for all UCLA textbooks), while a t-interval for the slope would provide an interval of reasonable values for the slope of the regression line for predicting Amazon price from UCLA Bookstore price (for all UCLA textbooks).

INFERENCE FOR PAIRED DATA
A 1-sample t-interval or t-test for a mean of differences only makes sense when we are asking whether, on average, one variable is greater than, less than or different from another (think histogram of the differences). A t-interval or t-test for the slope of a regression line makes sense when we are interested in the linear relationship between them (think scatterplot).

EXAMPLE 8.39
Previously, we looked at the relationship between body length and head length for brushtail possums. We also looked at the relationship between gift aid and family income for freshmen at Elmhurst College. Could we do a 1-sample t-test in either of these scenarios? We have to ask ourselves, does it make sense to ask whether, on average, body length is greater than head length? Similarly, does it make sense to ask whether, on average, gift aid is greater than family income? These don’t seem to be meaningful research questions; a 1-sample t-test for a mean of differences would not be useful here.

GUIDED PRACTICE 8.40
A teacher gives her class a pretest and a posttest. Does this result in paired data?
If so, which hypothesis test should she use?24 24Yes, there are two observations for each individual, so there is paired data. The appropriate test depends upon the question she
wants to ask. If she is interested in whether, on average, students do better on the posttest than the pretest, she should use a 1-sample t-test for a mean of differences. If she is interested in whether pretest score is a significant linear predictor of posttest score, she should do a t-test for the slope. In this situation, both tests could be useful, but which one should be used is dependent on the teacher’s research question.

Section summary
In Chapter 6, we used a χ2 test for independence to test for association between two categorical variables. In this section, we test for association/correlation between two numerical variables. • We use the slope b as a point estimate for the slope β of the population regression line. The slope of the population regression line is the true increase/decrease in y for each unit increase in x. If the slope of the population regression line is 0, there is no linear relationship between the two variables. • Under certain assumptions, the sampling distribution for b is normal and the distribution of the standardized test statistic using the standard error of the slope follows a t-distribution with n − 2 degrees of freedom. • When there is (x, y) data and the parameter of interest is the slope of the population regression line, e.g. the slope of the population regression line relating air quality index to average rainfall per year for each city in the United States: – Estimate β at the C% confidence level using a t-interval for the slope. – Test H0: β = 0 at the α significance level using a t-test for the slope. • The conditions for the t-interval and t-test for the slope of a regression line are the same. 1. Independence: Data come from a random sample or randomized experiment. If sampling without replacement, check that the sample size is less than 10% of the population size. 2.
Linearity: Check that the scatterplot does not show a curved trend and that the residual plot shows no ∪-shape pattern. 3. Constant variability: Use the residual plot to check that the standard deviation of the residuals is constant across all x-values. 4. Normality: The population of residuals is nearly normal or the
sample size is ≥ 30. If the sample size is less than 30 check for strong skew or outliers in the sample residuals. If neither is found, then the condition that the population of residuals is nearly normal is considered reasonable. • The confidence interval and test statistic are calculated as follows: Confidence interval: point estimate ± t × SE of estimate, or Test statistic: T = (point estimate − null value) / (SE of estimate), and p-value. point estimate: the slope b of the sample regression line; SE of estimate: SE of slope (find using computer output); df = n − 2. • The confidence interval for the slope of the population regression line estimates the true average increase in the y-variable for each unit increase in the x-variable. • The t-test for the slope and the 1-sample t-test for a mean of differences both involve paired, numerical data. However, the t-test for the slope asks if the two variables have a linear relationship, specifically if the slope of the population regression line is different from 0. The 1-sample t-test for a mean of differences, on the other hand, asks if the two variables are, on average, different, specifically if the mean of the population differences is not equal to 0.

Exercises
8.33 Body measurements, Part IV. The scatterplot and least squares summary below show the relationship between weight measured in kilograms and height measured in centimeters of 507 physically active individuals.
(Intercept): Estimate -105.0113, Std. Error 7.5394, t value -13.93, Pr(>|t|) 0.0000
height: Estimate 1.0176, Std. Error 0.0440, t value 23.13, Pr(>|t|) 0.0000
(a) Describe the relationship between height and weight. (b) Write the equation of the regression line. Interpret the slope and intercept in context. (c) Do the data provide strong evidence that an increase in height is associated with an increase in weight? State the null and alternative hypotheses, report the p-value, and state your conclusion.
(d) The correlation coefficient for height and weight is 0.72. Calculate R2 and interpret it in context.
8.34 MCU, predict US theater sales. The Marvel Comic Universe movies were an international movie sensation, containing 23 movies at the time of this writing. Here we consider a model predicting an MCU film’s gross theater sales in the US based on the first weekend sales performance in the US. The data are presented below in both a scatterplot and the model in a regression table. Scientific notation is used below, e.g. 42.5e6 corresponds to 42.5 × 10^6.

                    Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)         42.5e6    26.6e6      1.60     0.1251
opening weekend us  2.4361    0.1739      14.01    0.0000

(a) Describe the relationship between gross theater sales in the US and first weekend sales in the US. (b) Write the equation of the regression line. Interpret the slope and intercept in context. (c) Do the data provide strong evidence that higher opening weekend sales are associated with higher gross theater sales? State the null and alternative hypotheses, report the p-value, and state your conclusion. (d) The correlation coefficient for gross sales and first weekend sales is 0.950. Calculate R2 and interpret it in context. (e) Suppose we consider a set of all films ever released. Do you think the relationship between opening weekend sales and total sales would have as strong of a relationship as what we see with the MCU films?
[Scatterplots: weight (kg) vs. height (cm) for Exercise 8.33; US theater sales vs. US opening weekend sales for Exercise 8.34.]
8.4. INFERENCE FOR THE SLOPE OF A REGRESSION LINE 479
8.35 Spouses, Part II. The scatterplot below summarizes women's heights and their spouses' heights for a random sample of 170 married women in Britain, where both partners' ages are below 65 years. Summary output of the least squares fit for predicting spouse's height from the woman's height is also provided in the table.

               Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)    43.5755   4.6842      9.30     0.0000
height spouse  0.2863    0.0686      4.17     0.0000

(a) Is there strong evidence in this sample that taller women have taller spouses? State the hypotheses and include any information used to conduct the test. (b) Write the equation of the regression line for predicting the height of a woman's spouse based on the woman's height. (c) Interpret the slope and intercept in the context of the application. (d) Given that R2 = 0.09, what is the correlation of heights in this data set? (e) You meet a married woman from Britain who is 5'9" (69 inches). What would you predict her spouse's height to be? How reliable is this prediction? (f) You meet another married woman from Britain who is 6'7" (79 inches). Would it be wise to use the same linear model to predict her spouse's height? Why or why not?
8.36 Urban homeowners, Part II. Exercise 8.29 gives a scatterplot displaying the relationship between the percent of families that own their home and the percent of the population living in urban areas. Below is a similar scatterplot, excluding District of Columbia, as well as the residuals plot. There were 51 cases. (a) For these data, R2 = 0.28. What is the correlation? How can you tell if it is positive or negative? (b) Examine the residual plot. What do you observe? Is a simple least squares fit appropriate for these data?
[Scatterplots: spouse's height vs. woman's height (inches) for Exercise 8.35; % who own home vs. % urban population, with residual plot, for Exercise 8.36.]
8.37 Murders and poverty, Part II. Exercise 8.25 presents regression output from a model for predicting annual murders per million from percentage living in poverty based on a random sample of 20 metropolitan areas. The model output is also provided below.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  -29.901   7.789       -3.839   0.001
poverty%     2.559     0.390       6.562    0.000
s = 5.512, R2 = 70.52%, R2 adj = 68.89%
(a) What are the hypotheses for evaluating whether poverty percentage is a significant predictor of murder rate? (b) State the conclusion of the hypothesis test from part (a) in context of the data. (c) Calculate a 95% confidence interval for the slope of poverty percentage, and interpret it in context of the data. (d) Do your results from the hypothesis test and the confidence interval agree? Explain.
8.38 Babies. Is the gestational age (time between conception and birth) of a low birth-weight baby useful in predicting head circumference at birth? Twenty-five low birth-weight babies were studied at a Harvard teaching hospital; the investigators calculated the regression of head circumference (measured in centimeters) against gestational age (measured in weeks). The estimated regression line is

head circumference = 3.91 + 0.78 × gestational age

The standard error for the coefficient of gestational age is 0.35. Is there significant evidence that gestational age has a positive linear association with head circumference? Use the Identify, Choose, Check, Calculate, Conclude framework and make sure to identify any assumptions used in the test.
Chapter highlights
This chapter focused on describing the linear association between two numerical variables and fitting a linear model.
• The correlation coefficient, r, measures the strength and direction of the linear association between two variables. However, r alone cannot tell us whether data follow a linear trend or whether a linear model is appropriate.
• The explained variance, R2, measures the proportion of variation in the y values explained by a given model. Like r, R2 alone cannot tell us whether data follow a linear trend or whether a linear model is appropriate.
• Every analysis should begin with graphing the data using a scatterplot in order to see the association and any deviations from the trend (outliers or influential values). 
A residual plot helps us better see patterns in the data. • When the data show a linear trend, we fit a least squares regression line of the form: ˆy = a + bx, where a is the y-intercept and b is the slope. It is important to be able to calculate a
and b using the summary statistics and to interpret them in the context of the data.
• A residual, y − ˆy, measures the error for an individual point. The standard deviation of the residuals, s, measures the typical size of the residuals.
• ˆy = a + bx provides the best fit line for the observed data. To estimate or hypothesize about the slope of the population regression line, first confirm that the residual plot has no pattern and that a linear model is reasonable, then use a t-interval for the slope or a t-test for the slope with n − 2 degrees of freedom.
In this chapter we focused on simple linear models with one explanatory variable. More complex methods of prediction, such as multiple regression (more than one explanatory variable) and nonlinear regression can be studied in a future course.
Chapter exercises
8.39 True / False. Determine if the following statements are true or false. If false, explain why. (a) A correlation coefficient of -0.90 indicates a stronger linear relationship than a correlation of 0.5. (b) Correlation is a measure of the association between any two variables.
8.40 Cats, Part II. Exercise 8.26 presents regression output from a model for predicting the heart weight (in g) of cats from their body weight (in kg). The coefficients are estimated using a dataset of 144 domestic cats. The model output is also provided below. Assume that conditions for inference on the slope are met.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  -0.357    0.692       -0.515   0.607
body wt      4.034     0.250       16.119   0.000
s = 1.452, R2 = 64.66%, R2 adj = 64.41%

(a) What are the hypotheses for evaluating whether body weight is associated with heart weight in cats? (b) State the conclusion of the hypothesis test from part (a) in context of the data. (c) Calculate a 95% confidence interval for the slope of body weight, and interpret it in context of the data. (d) Do your results from the hypothesis test and the confidence interval
agree? Explain.
8.41 Nutrition at Starbucks, Part II. Exercise 8.22 introduced a data set on nutrition information on Starbucks food menu items. Based on the scatterplot and the residual plot provided, describe the relationship between the protein content and calories of these menu items, and determine if a simple linear model is appropriate to predict amount of protein from the number of calories.
8.42 Helmets and lunches. The scatterplot shows the relationship between socioeconomic status measured as the percentage of children in a neighborhood receiving reduced-fee lunches at school (lunch) and the percentage of bike riders in the neighborhood wearing helmets (helmet). The average percentage of children receiving reduced-fee lunches is 30.8% with a standard deviation of 26.7% and the average percentage of bike riders wearing helmets is 38.8% with a standard deviation of 16.9%. (a) If the R2 for the least-squares regression line for these data is 72%, what is the correlation between lunch and helmet? (b) Calculate the slope and intercept for the least-squares regression line for these data. (c) Interpret the intercept of the least-squares regression line in the context of the application. (d) Interpret the slope of the least-squares regression line in the context of the application. (e) What would the value of the residual be for a neighborhood where 40% of the children receive reduced-fee lunches and 40% of the bike riders wear helmets? Interpret the meaning of this residual in the context of the application.
[Scatterplot and residual plot: protein (grams) vs. calories for Exercise 8.41; rate of wearing a helmet vs. rate of receiving a reduced-fee lunch for Exercise 8.42.]
8.43 Match the correlation, Part III. Match each correlation to the corresponding scatterplot. (a) r = −0.72 (b) r = 0.07 (c) r = 0.86 (d) r = 0.99
8.44 Rate my professor.
Many college courses conclude by giving students the opportunity to evaluate the course and the instructor anonymously. However, the use of these student evaluations as an indicator of course quality and teaching effectiveness is often criticized because these measures may reflect the influence of non-teaching related characteristics, such as the physical appearance of the instructor. Researchers at University of Texas, Austin collected data on teaching evaluation score (higher score means better) and standardized beauty score (a score of 0 means average, negative score means below average, and a positive score means above average) for a sample of 463 professors.25 The scatterplot below shows the relationship between these variables, and regression output is provided for predicting teaching evaluation score from beauty score.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  4.010     0.0255      157.21   0.0000
beauty       —         0.0322      4.13     0.0000

(a) Given that the average standardized beauty score is -0.0883 and average teaching evaluation score is 3.9983, calculate the slope. Alternatively, the slope may be computed using just the information provided in the model summary table. (b) Do these data provide convincing evidence that the slope of the relationship between teaching evaluation and beauty is positive? Explain your reasoning. (c) List the conditions required for linear regression and check if each one is satisfied for this model based on the following diagnostic plots.
25Daniel S Hamermesh and Amy Parker. “Beauty in the classroom: Instructors’ pulchritude and putative pedagogical productivity”. In: Economics of Education Review 24.4 (2005), pp. 369–376.
[Scatterplots (1)-(4) for Exercise 8.43; diagnostic plots for Exercise 8.44: teaching evaluation vs. beauty, residuals vs. beauty, residuals distribution, and residuals vs. order of data collection.]
8.45 Trees. The scatterplots below show the relationship between height, diameter, and volume of timber in 31 felled black cherry trees. The diameter of the tree is measured 4.5 feet above the ground.26 (a) Describe the relationship between volume and height of these trees. (b) Describe the relationship between volume and diameter of these trees.
(c) Suppose you have height and diameter measurements for another black cherry tree. Which of these variables would be preferable to use to predict the volume of timber in this tree using a simple linear regression model? Explain your reasoning. 26Source: R Dataset,
stat.ethz.ch/R-manual/R-patched/library/datasets/html/trees.html.
[Scatterplots for Exercise 8.45: volume (cubic feet) vs. height (feet), and volume (cubic feet) vs. diameter (inches).]
Appendix A Exercise solutions
1 Data collection
1.1 (a) Treatment: 10/43 = 0.23 → 23%. (b) Control: 2/46 = 0.04 → 4%. (c) A higher percentage of patients in the treatment group were pain free 24 hours after receiving acupuncture. (d) It is possible that the observed difference between the two group percentages is due to chance.
1.3 (a) “Is there an association between air pollution exposure and preterm births?” (b) 143,196 births in Southern California between 1989 and 1993. (c) Measurements of carbon monoxide, nitrogen dioxide, ozone, and particulate matter less than 10µg/m3 (PM10) collected at air-quality-monitoring stations as well as length of gestation. Continuous numerical variables.
1.5 (a) “Does explicitly telling children not to cheat affect their likelihood to cheat?” (b) 160 children between the ages of 5 and 15. (c) Four variables: (1) age (numerical, continuous), (2) sex (categorical), (3) whether they were an only child or not (categorical), (4) whether they cheated or not (categorical).
1.7 Explanatory: acupuncture or not. Response: if the patient was pain free or not.
1.9 (a) 50 × 3 = 150. (b) Four continuous numerical variables: sepal length, sepal width, petal length, and petal width. (c) One categorical variable, species, with three levels: setosa, versicolor, and virginica.
1.11 (a) Airport ownership status (public/private), airport usage status (public/private), latitude, and longitude. (b) Airport ownership status: categorical, not ordinal. Airport usage status: categorical, not ordinal.
Latitude: numerical, continuous. Longitude: numerical, continuous.
1.13 (a) Population: all births, sample: 143,196 births between 1989 and 1993 in Southern California. (b) If births in this time span and geographic area can be considered to be representative of all births, then the results are generalizable to the population of Southern California. However, since the study is observational the findings cannot be used to establish causal relationships.
1.15 (a) Population: all asthma patients aged 18-69 who rely on medication for asthma treatment. Sample: 600 such patients. (b) If the patients in this sample, who are likely not randomly sampled, can be considered to be representative of all asthma patients aged 18-69 who rely on medication for asthma treatment, then the results are generalizable to the population defined above. Additionally, since the study is experimental, the findings can be used to establish causal relationships.
1.17 (a) Observation. (b) Variable. (c) Sample statistic (mean). (d) Population parameter (mean).
1.19 (a) Observational. (b) Use stratified sampling to randomly sample a fixed number of students, say 10, from each section for a total sample size of 40 students.
1.21 (a) Positive, non-linear, somewhat strong. Countries in which a higher percentage of the population have access to the internet also tend to have higher average life expectancies; however, the rise in life expectancy trails off around 80 years. (b) Observational. (c) Wealth: countries with individuals who can widely afford the internet can probably also afford basic medical care. (Note: Answers may vary.)
1.23 (a) Simple random sampling is okay. In fact, it’s rare for simple random sampling to not be a reasonable sampling method! (b) The student opinions may vary by field of study, so stratifying by this variable makes sense and would be reasonable.
(c) Students of similar ages are probably going to have more similar opinions, and we want clusters to be diverse with respect to the outcome of interest, so this would not be a good approach. (Additional thought: the clusters in this case may also have very different numbers of people, which can also create unexpected sample sizes.)
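The stratified plan in 1.19(b) — a simple random sample of a fixed number of students from each section — can be sketched in code. This is an illustrative sketch only: the four section rosters and the `stratified_sample` helper are hypothetical, not from the text.

```python
import random

# Hypothetical rosters: four course sections of 25 students each.
sections = {s: [f"{s}-student{i}" for i in range(1, 26)]
            for s in ["A", "B", "C", "D"]}

def stratified_sample(strata, n_per_stratum, seed=1):
    """Draw a simple random sample of fixed size from each stratum."""
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

# 10 students from each of the 4 sections -> total sample size 40,
# matching the plan described in solution 1.19(b).
sample = stratified_sample(sections, 10)
total_sampled = sum(len(v) for v in sample.values())
```

Because every stratum contributes exactly the same number of students, each section is guaranteed representation, which is the point of stratifying by section.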
1.25 (a) The cases are 200 randomly sampled men and women. (b) The response variable is attitude towards a fictional microwave oven. (c) The explanatory variable is dispositional attitude. (d) Yes, the cases are sampled randomly. (e) This is an observational study since there is no random assignment to treatments. (f) No, we cannot establish a causal link between the explanatory and response variables since the study is observational. (g) Yes, the results of the study can be generalized to the population at large since the sample is random.
1.27 (a) Simple random sample. Non-response bias: if only those people who have strong opinions about the survey respond, the sample may not be representative of the population. (b) Convenience sample. Undercoverage bias: the sample may not be representative of the population since it consists only of his friends. It is also possible that the study will have non-response bias if some choose to not bring back the survey. (c) Convenience sample. This will have similar issues to handing out surveys to friends. (d) Multi-stage sampling. If the classes are similar to each other with respect to student composition this approach should not introduce bias, other than potential non-response bias.
1.29 (a) Exam performance. (b) Light level: fluorescent overhead lighting, yellow overhead lighting, no overhead lighting (only desk lamps). (c) Sex: man, woman.
1.31 (a) Experiment. (b) Light level (overhead lighting, yellow overhead lighting, no overhead lighting) and noise level (no noise, construction noise, and human chatter noise). (c) Since the researchers want to ensure equal representation of graduate and undergraduate students, program type will be a blocking variable.
1.33 Need randomization and blinding. One possible outline: (1) Prepare two cups for each participant, one containing regular Coke and the other containing Diet Coke. Make sure the cups are identical and contain equal amounts of soda. 
Label the cups A (regular) and B (diet). (Be sure to randomize A and B for each trial!) (2) Give each participant the two cups, one cup at a time, in random order, and ask the participant to record a value that indicates how much she liked the beverage. Be sure that neither the participant nor the person handing out the cups
knows the identity of the beverage to make this a double-blind experiment. (Answers may vary.)
1.35 (a) Observational study. (b) Dog: Lucy. Cat: Luna. (c) Oliver and Lily. (d) Positive, as the popularity of a name for dogs increases, so does the popularity of that name for cats.
1.37 (a) Experiment. (b) Treatment: 25 grams of chia seeds twice a day, control: placebo. (c) Yes, gender. (d) Yes, single blind since the patients were blinded to the treatment they received. (e) Since this is an experiment, we can make a causal statement. However, since the sample is not random, the causal statement cannot be generalized to the population at large.
1.39 (a) Non-responders may have a different response to this question, e.g. parents who returned the surveys likely don’t have difficulty spending time with their children. (b) It is unlikely that the women who were reached at the same address 3 years later are a random sample. These missing responders are probably renters (as opposed to homeowners), which means that they might have a lower socio-economic status than the respondents. (c) There is no control group in this study; it is an observational study, and there may be confounding variables, e.g. these people may go running because they are generally healthier and/or do other exercises.
1.41 (a) Randomized controlled experiment. (b) Explanatory: treatment group (categorical, with 3 levels). Response variable: Psychological wellbeing. (c) No, because the participants were volunteers. (d) Yes, because it was an experiment. (e) The statement should say “evidence” instead of “proof”.
1.43 (a) County, state, driver’s race, whether the car was searched or not, and whether the driver was arrested or not. (b) All categorical, non-ordinal. (c) Response: whether the car was searched or not. Explanatory: race of the driver.
2 Summarizing data
2.1 (a) There is a weak and positive relationship between age and income. 
With so few points it is difficult to tell the form of the relationship (linear or not); however, the relationship does look somewhat curved. (b) [Scatterplot answer omitted.] (c) For males, as age increases so does income; however, this pattern is not apparent for females.
2.3 (a)
0 | 000003333333
0 | 7779
1 | 0011
Legend: 1 | 0 = 10%
(b) [Dot plot omitted.] (c) [Histogram omitted.] (d) 40% (Note: if using only the rel. freq. histogram, you can only get an estimate because 7 is in the middle of the bin. Use the dot plot to get a more accurate answer.)
2.5 (a) Positive association: mammals with longer gestation periods tend to live longer as well. (b) Association would still be positive. (c) No, they are not independent. See part (a).
2.7 Both distributions are right skewed and bimodal with modes at 10 and 20 cigarettes; note that people may be rounding their answers to half a pack or a whole pack. The median of each distribution is between 10 and 15 cigarettes. The middle 50% of the data (the IQR) appears to be spread equally in each group and have a width of about 10 to 15. There are potential outliers above 40 cigarettes per day. It appears that respondents who smoke only a few cigarettes (0 to 5) smoke more on the weekdays than on weekends.
2.9 (a) x̄_amtWeekends = 20, x̄_amtWeekdays = 16. (b) s_amtWeekends = 0, s_amtWeekdays = 4.18. In this very small sample, variability is higher on weekdays.
2.11 Any 10 employees whose average number of days off is between the minimum and the mean number of days off for the entire workforce at this plant.
2.13 (a) Dist 2 has a higher mean since 20 > 13, and a higher standard deviation since 20 is further from the rest of the data than 13. (b) Dist 1 has a higher mean since −20 > −40, and Dist 2 has a higher standard deviation since -40 is farther away from the rest of the data than -20. (c) Dist 2 has a higher mean since all values in this distribution are higher than those in Dist 1, but both distributions have the same standard deviation since they are equally variable around their respective means. 
(d) Both distributions have the same mean since they’re both centered at 300, but Dist 2 has a higher standard deviation since the observations are farther from
the mean than in Dist 1.
[Figures for solutions 2.1 and 2.3: scatterplots of income vs. age (males, females, combined), and a dot plot, frequency histogram, and relative frequency histogram of fiber content (% of grams).]
2.15 (a) About 30. (b) Since the distribution is right skewed the mean is higher than the median. (c) Q1: between 15 and 20, Q3: between 35 and 40, IQR: about 20. (d) Values that are considered to be unusually low or high lie more than 1.5×IQR away from the quartiles. Upper fence: Q3 + 1.5 × IQR = 37.5 + 1.5 × 20 = 67.5; Lower fence: Q1 − 1.5 × IQR = 17.5 − 1.5 × 20 = −12.5. The lowest AQI recorded is not lower than 5 and the highest AQI recorded is not higher than 65, which are both within the fences. Therefore none of the days in this sample would be considered to have an unusually low or high AQI.
2.17 The histogram shows that the distribution is bimodal, which is not apparent in the box plot. The box plot makes it easy to identify more precise values of observations outside of the whiskers.
2.19 (a) The distribution of number of pets per household is likely right skewed as there is a natural boundary at 0 and only a few people have many pets. Therefore the center would be best described by the median, and variability would be best described by the IQR. (b) The distribution of distance to work is likely right skewed as there is a natural boundary at 0 and only a few people live a very long distance from work. Therefore the center would be best described by the median, and variability would be best described by
the IQR. (c) The distribution of heights of males is likely symmetric. Therefore the center would be best described by the mean, and variability would be best described by the standard deviation.
2.21 (a) The median is a much better measure of the typical amount earned by these 42 people. The mean is much higher than the income of 40 of the 42 people. This is because the mean is an arithmetic average and gets affected by the two extreme observations. The median does not get affected as much since it is robust to outliers. (b) The IQR is a much better measure of variability in the amounts earned by nearly all of the 42 people. The standard deviation gets affected greatly by the two high salaries, but the IQR is robust to these extreme observations.
2.23 (a) The distribution is unimodal and symmetric with a mean of about 25 minutes and a standard deviation of about 5 minutes. There do not appear to be any counties with unusually high or low mean travel times. (b) Answers will vary. There are pockets of longer travel time around DC, Southeastern NY, Chicago, Minneapolis, Los Angeles, and many other big cities. There is also a large section of shorter average commute times that overlap with farmland in the Midwest. Many farmers’ homes are adjacent to their farmland, so their commute would be brief, which may explain why the average commute time for these counties is relatively low.
2.25 (a) 8.85%. (b) 6.94%. (c) 58.86%. (d) 4.56%.
2.27 (a) Z_VR = 1.29, Z_QR = 0.52. (b) She scored 1.29 standard deviations above the mean on the Verbal Reasoning section and 0.52 standard deviations above the mean on the Quantitative Reasoning section. (c) She did better on the Verbal Reasoning section since her Z-score on that section was higher. (d) Perc_VR = 0.9007 ≈ 90%, Perc_QR = 0.6990 ≈ 70%. (e) 100% − 90% = 10% did better than her on VR, and 100% − 70% = 30% did better than her on QR. (f) We cannot compare the raw scores since they are on different scales. 
Comparing her percentile scores is
more appropriate when comparing her performance to others. (g) Answer to part (b) would not change as Z-scores can be calculated for distributions that are not normal. However, we could not answer parts (c)-(e) since we cannot use the normal probability table to calculate probabilities and percentiles without a normal model.
2.29 (a) Z = 0.84, which corresponds to approximately 159 on QR. (b) Z = −0.52, which corresponds to approximately 147 on VR.
2.31 (a) Z = 1.2, P(Z > 1.2) = 0.1151. (b) Z = −1.28 → 70.6◦F or colder.
2.33 (a) Z = 1.08, P(Z > 1.08) = 0.1401. (b) The answers are very close because only the units were changed. (The only reason why they differ at all is because 28◦C is 82.4◦F, not precisely 83◦F.) (c) Since IQR = Q3 − Q1, we first need to find Q3 and Q1 and take the difference between the two. Remember that Q3 is the 75th and Q1 is the 25th percentile of a distribution. Q1 = 23.13, Q3 = 26.86, IQR = 26.86 − 23.13 = 3.73.
[Normal curve sketches accompanying the solutions above, including curves marked Z_VR = 1.29 and Z_QR = 0.52.]
2.35 14/20 = 70% are within 1 SD. Within 2 SD: 19/20 = 95%. Within 3 SD: 20/20 = 100%. They follow this rule closely.
2.37 (a) We see the order of the categories and the relative frequencies in the bar plot. (b) There are no features that are apparent in the pie chart but not in the bar plot. (c) We usually prefer to use a bar plot as we can also see the relative frequencies of the categories in this graph.
2.39 The vertical locations at which the ideological groups break into the Yes, No, and Not Sure categories differ, which indicates that likelihood of supporting the DREAM act varies by political ideology. This suggests that the two
variables may be dependent.
2.41 (a) (i) False. Instead of comparing counts, we should compare percentages of people in each group who suffered cardiovascular problems. (ii) True. (iii) False. Association does not imply causation. We cannot infer a causal relationship based on an observational study. The difference from part (ii) is subtle. (iv) True. (b) Proportion of all patients who had cardiovascular problems: 7,979/227,571 ≈ 0.035. (c) The expected number of heart attacks in the rosiglitazone group, if having cardiovascular problems and treatment were independent, can be calculated as the number of patients in that group multiplied by the overall cardiovascular problem rate in the study: 67,593 × 7,979/227,571 ≈ 2370. (d) (i) H0: The treatment and cardiovascular problems are independent. They have no relationship, and the difference in incidence rates between the rosiglitazone and pioglitazone groups is due to chance. HA: The treatment and cardiovascular problems are not independent. The difference in the incidence rates between the rosiglitazone and pioglitazone groups is not due to chance and rosiglitazone is associated with an increased risk of serious cardiovascular problems. (ii) A higher number of patients with cardiovascular problems than expected under the assumption of independence would provide support for the alternative hypothesis as this would suggest that rosiglitazone increases the risk of such problems. (iii) In the actual study, we observed 2,593 cardiovascular events in the rosiglitazone group. In the 1,000 simulations under the independence model, we observed somewhat less than 2,593 in every single simulation, which suggests that the actual results did not come from the independence model. That is, the variables do not appear to be independent, and we reject the independence model in favor of the alternative. The study’s results provide convincing evidence that rosiglitazone is associated with an increased risk of cardiovascular problems.
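The expected count in 2.41(c) and a small version of the independence-model simulation described in 2.41(d)(iii) can be sketched in code. This is an illustrative sketch, not the book's setup: the simulation mechanism (randomly scattering the 7,979 events across all patients) and the 100-trial count are my own choices, versus the study's 1,000 simulations.

```python
import random

# Counts from the rosiglitazone study discussed in solution 2.41.
n_total = 227571   # all patients in the study
n_events = 7979    # patients with cardiovascular problems
n_rosi = 67593     # patients in the rosiglitazone group
observed = 2593    # observed events in the rosiglitazone group

p_event = n_events / n_total   # overall event rate, ~0.035
expected = n_rosi * p_event    # ~2370 expected events under independence

def simulate_once(rng):
    """One draw from the independence model: scatter the events at
    random over all patients and count how many land in the
    rosiglitazone group (indices 0..n_rosi-1)."""
    events = rng.sample(range(n_total), n_events)
    return sum(1 for i in events if i < n_rosi)

rng = random.Random(42)
sims = [simulate_once(rng) for _ in range(100)]
```

The simulated counts cluster near the expected value of about 2,370, while the observed count of 2,593 sits far above them — the same pattern the solution uses to reject the independence model.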
2.43 (a) Decrease: the new score is smaller than the mean of the 24 previous scores. (b) Calculate a weighted mean. Use a weight of 24 for the old mean and 1 for the new mean: (24 × 74 + 1 × 64)/(24 +
1) = 73.6. (c) The new score is more than 1 standard deviation away from the previous mean, so increase. 2.45 No, we would expect this distribution to be right skewed. There are two reasons for this: (1) there is a natural boundary at 0 (it is not possible to watch less than 0 hours of TV), (2) the standard deviation of the distribution is very large compared to the mean. 2.47 The distribution of ages of best actress winners are right skewed with a median around 30 years. The distribution of ages of best actress winners is also right skewed, though less so, with a median around 40 years. The difference between the peaks of these distributions suggest that best actress winners are typically younger than best actor winners. The ages of best actress winners are more variable than the ages of best actor winners. There are potential outliers on the higher end of both of the distributions. 2.49 (b) Z = 10−7.44 2.51 (a) Z = 5.5−7.44 1.33 = −1.49; P (Z < −1.49) = 0.068. Approximately 6.8% of the newborns were of 1.33 = 1.925. Uslow birth weight. ing a lower bound of 2 and an upper bound of 5, we get P (Z > 1.925) = 0.027. Approximately 2.7% of the newborns weighed over 10 pounds. (c) Approximately 2.7% of the newborns weighed over 10 pounds. Because there were 23,419 of them, about 0.027 × 23419 ≈ 632 weighed greater than 10 pounds. (d) Because we have the percentile, this is the inverse problem. To get the Z-score, use the inverse normal option with 0.90 to get Z = 1.28. Then solve for x in 1.28 = x−7.44 to get x = 9.15. To be at the 90th 1.33 percentile among this group, a newborn would have to weigh 9.15 pounds. 2.53 (a) 93.94%. (b) 93.53%. (c) 80.49 miles/hour. (d) 70.54%. l60708090Scores 490 APPENDIX A. EXERCISE SOLUTIONS 3 Probability (d) 0.11/0.33 ≈ 0.33
3.1 (a) False. These are independent trials. (b) False. There are red face cards. (c) True. A card cannot be both a face card and an ace.

3.3 (a) 10 tosses. Fewer tosses mean more variability in the sample fraction of heads, meaning there’s a better chance of getting at least 60% heads. (b) 100 tosses. More flips means the observed proportion of heads would often be closer to the average, 0.50, and therefore also above 0.40. (c) 100 tosses. With more flips, the observed proportion of heads would often be closer to the average, 0.50. (d) 10 tosses. Fewer flips would increase variability in the fraction of tosses that are heads.

3.5 (a) 0.5^10 = 0.00098. (b) 0.5^10 = 0.00098. (c) P(at least one tails) = 1 − P(no tails) = 1 − 0.5^10 ≈ 1 − 0.001 = 0.999.

3.7 (a) No, there are voters who are both independent and swing voters. (b) [Venn diagram: Independent only 24%, both 11%, swing only 12%.] (c) Each Independent voter is either a swing voter or not. Since 35% of voters are Independents and 11% are both Independent and swing voters, the other 24% must not be swing voters. (d) 0.47. (e) 0.53. (f) P(Independent) × P(swing) = 0.35 × 0.23 = 0.08, which does not equal P(Independent and swing) = 0.11, so the events are dependent.

3.9 (a) If the class is not graded on a curve, they are independent. If graded on a curve, then neither independent nor disjoint – unless the instructor will only give one A, which is a situation we will ignore in parts (b) and (c). (b) They are probably not independent: if you study together, your study habits would be related, which suggests your course performances are also related. (c) No. See the answer to part (a) when the course is not graded on a curve. More generally: if two things are unrelated (independent), then one occurring does not preclude the other from occurring.

3.11 (a) 280/792 = 0.354. (b) 445/792 = 0.562. (c) 231/792 = 0.292. (d) 0.354 × 0.562 = 0.199, which does not equal 0.292. The events are not independent, so you cannot just multiply the unconditional probabilities.

3.13 (a) No, but we could if A and B are independent. (b-i) 0.21. (b-ii) 0.79. (b-iii) 0.3. (c) No, because 0.1 ≠ 0.21, where 0.21 was the value computed under independence from part (a). (d) 0.143.

3.15 (a) No, 0.18 of respondents fall into this combination. (b) 0.60 + 0.20 − 0.18 = 0.62. (c) 0.18/0.20 = 0.9. (d) 0.11/0.33 ≈ 0.33. (e) No, otherwise the answers to (c) and (d) would be the same. (f) 0.06/0.34 ≈ 0.18.

3.17 (a) 0.3. (b) 0.3. (c) 0.3. (d) 0.3 × 0.3 = 0.09. (e) Yes, the population that is being sampled from is identical in each draw.

3.19 (a) 2/9 ≈ 0.22. (b) 3/9 ≈ 0.33. (c) 3/10 × 2/9 ≈ 0.067. (d) No, e.g. in this exercise, removing one chip meaningfully changes the probability of what might be drawn next.

3.21 P(1 leggings, 2 jeans, 3 jeans) = 5/24 × 7/23 × 6/22 = 0.0173. However, the person with leggings could have come 2nd or 3rd, and these each have this same probability, so 3 × 0.0173 = 0.0519.

3.23 (a) [Tree diagram: Passed? yes 0.8, no 0.2; Can construct box plots? yes 0.86, no 0.14 given passed, and yes 0.65, no 0.35 given not passed; joint probabilities 0.8 × 0.86 = 0.688, 0.8 × 0.14 = 0.112, 0.2 × 0.65 = 0.13, 0.2 × 0.35 = 0.07.] (b) 0.84.

3.25 0.0714. Even when a patient tests positive for lupus, there is only a 7.14% chance that he actually has lupus. House may be right. [Tree diagram: Lupus? yes 0.02, no 0.98; Result positive 0.98, negative 0.02 given lupus, and positive 0.26, negative 0.74 given no lupus; joint probabilities 0.02 × 0.98 = 0.0196, 0.02 × 0.02 = 0.0004, 0.98 × 0.26 = 0.2548, 0.98 × 0.74 = 0.7252.]

3.27 (a) P(pass) = 0.5, but it should be 0.16. (b) P(pass) = 0.2, instead of 0.16. (c) P(pass) = 0.17, instead of 0.16.

3.29 (a) Starting at row 3 of the random number table, we will read across the table two digits at a time. If the random number is between 00-15, the car will fail the pollution test. If the number is between 16-99, the car will pass the test. (Answers may vary.) (b) Fleet 1: 18-52-97-32-85-95-29 → P-P-P-P-P-P-P → fleet passes. Fleet 2: 14-96-06-67-17-49-59 → F-P-F-P-P-P-P → fleet fails. Fleet 3: 05-33-67-97-58-11-81 → F-P-P-P-P-F-P → fleet fails. Fleet 4: 23-81-83-21-71-08-50 → P-P-P-P-P-F-P → fleet fails. Fleet 5: 82-84-39-31-83-14-34 → P-P-P-P-P-F-P → fleet fails. Estimate: 4/5 = 0.80. (c) P(at least one car fails in a fleet of seven) = 1 − P(no cars fail) = 1 − (0.84)^7 = 0.705.

3.31 (a) Mean: $
3 × 0.5 + $5 × 0.3 + $10 × 0.15 + $25 × 0.05 = $5.75. (b) To compute the SD, it is easier to first compute the variance: (3 − 5.75)^2 × 0.5 + (5 − 5.75)^2 × 0.3 + (10 − 5.75)^2 × 0.15 + (25 − 5.75)^2 × 0.05 = 25.1875. The SD is then the square root of this value: $5.02.

3.33 (a) E(X) = 3.59. SD(X) = 9.64. (b) E(X) = −1.41. SD(X) = 9.64. (c) No, the expected net profit is negative, so on average you expect to lose money.

3.35 5% increase in value.

3.37 E = −0.0526. SD = 0.9986.

3.39 (a) Let X represent the amount of lemonade in the pitcher, Y represent the amount of lemonade in a glass, and W represent the amount left over after pouring. Then, µW = E(X − Y) = 64 − 12 = 52. (b) σW = √(SD(X)^2 + SD(Y)^2) = √(1.732^2 + 1^2) = √4 = 2. (c) P(W > 50) = P(Z > (50 − 52)/2) = P(Z > −1) = 1 − 0.1587 = 0.8413.

3.45 (a) 0.875^2 × 0.125 = 0.096. (b) µ = 8, σ = 7.48.

3.47 If p is the probability of a success, then the mean of a Bernoulli random variable X is given by µ = E[X] = P(X = 0) × 0 + P(X = 1) × 1 = (1 − p) × 0 + p × 1 = p.

3.49 (a) (5 choose 1) = 5. (b) (5 choose 3) + (5 choose 4) + (5 choose 5) = 10 + 5 + 1 = 16. (c) (5 choose 3) = 10. (d) (5 choose 4) = 5.

3.51 (a)
Binomial conditions are met: (1) Independent trials: In a random sample, whether or not one 18-20 year old has consumed alcohol does not depend on whether or not another one has. (2) Fixed number of trials: n = 10. (3) Only two outcomes at each trial: Consumed or did not consume alcohol. (4) Probability of a success is the same for each trial: p = 0.697. (b) 0.203. (c) 0.203. (d) 0.167. (e) 0.997.

3.53 (a) 1 − 0.75^3 = 0.5781. (b) 0.1406. (c) 0.4219. (d) 1 − 0.25^3 = 0.9844.

3.55 (a) µ = 35, σ = 3.24. (b) Z = (45 − 35)/3.24 = 3.09. Since 45 is more than 3 standard deviations away from the mean, we can assume that it is an unusual observation. Therefore yes, we would be surprised. (c) Using the normal approximation, 0.0010. With 0.5 correction, 0.0017.

3.57 (a) Invalid. Sum is greater than 1. (b) Valid. Probabilities are between 0 and 1, and they sum to 1. In this class, every student gets a C. (c) Invalid. Sum is less than 1. (d) Invalid. There is a negative probability. (e) Valid. Probabilities are between 0 and 1, and they sum to 1. (f) Invalid. There is a negative probability.

3.59 0.8247.

3.41 (a) The combined scores follow a normal distribution with µcombined = 304 and σcombined = 10.38. Then, P(combined score > 320) is approximately 0.06. (b) Z = 1.28 (using calculator or table). Then we set 1.28 = (x − 304)/10.38 and find x ≈ 317.

3.43 (a) No. The cards are not independent. For example, if the first card is an ace of clubs, that implies the second card cannot be an ace of clubs. Additionally, there are many possible categories, which would need to be simplified. (b) No. There are six events under consideration.
The Bernoulli distribution allows for only two events or categories. Note that rolling a die could be a Bernoulli trial if we simplify to two events, e.g. rolling a 6 and not rolling a 6, though specifying such details would be necessary.

3.61 (a) E = $3.90. SD = $0.34. (b) E = $27.30. SD = $0.89.

3.63 (a) 13. (b) No, these 27 students are not a random sample from the university’s student population. For example, it might be argued that the proportion of smokers among students who go to the gym at 9 am on a Saturday morning would be lower than the proportion of smokers in the university as a whole.

3.65 0 wins (−$3): 0.1458. 1 win (−$1): 0.3936. 2 wins (+$1): 0.3543. 3 wins (+$3): 0.1063. [Tree diagram for the HIV testing problem: HIV? yes 0.259, no 0.741; Result positive 0.997, negative 0.003 given HIV, and positive 0.074, negative 0.926 given no HIV; joint probabilities 0.259 × 0.997 = 0.2582, 0.259 × 0.003 = 0.0008, 0.741 × 0.074 = 0.0548, 0.741 × 0.926 = 0.6862.]

4 Distributions of random variables

4.1 (a) Each observation in each of the distributions represents the sample proportion (p̂) from samples of size n = 20, n = 100, and n = 500, respectively. (b) The centers for all three distributions are at 0.95, the true population parameter. When n is small, the distribution is skewed to the left and not smooth. As n increases, the variability of the distribution (standard deviation) decreases, and the shape of the distribution becomes more unimodal and symmetric.

4.3 (a) False. Doesn’t satisfy the success-failure condition. (b) True. The success-failure condition is not satisfied. In most samples we would expect p̂ to be close to 0.08, the true population proportion. While p̂ can be much above 0.08, it is bound below by 0, suggesting it would take on a right skewed shape. Plotting the sampling distribution would confirm this suspicion. (c) False. SE(p̂) = 0.0243, and p̂ = 0.12 is only (0.12 − 0.08)/0.0243 = 1.65 SEs away from the mean, which would not be considered unusual. (d) True. p̂ = 0.12 is 2.32 standard errors away from the mean, which is often considered unusual. (e) False. Decreases the SE by a factor of 1/√2.

4.5 (a) SD(p̂) = √(p(1 − p)/n) = 0.0707. This describes the typical distance that the sample proportion will deviate from the true proportion, p = 0.5. (b) p̂ approximately follows N(0.5, 0.0707). Z = (0.55 − 0.50)/0.0707 ≈ 0.71. This corresponds to an upper tail of about 0.2389. That is, P(p̂ > 0.55) ≈ 0.24.

4.7 (a) First we need to check that the necessary conditions are met. There are 200 × 0.08 = 16 expected successes and 200 × (1 − 0.08) = 184 expected failures, therefore the success-failure condition is met. Then the binomial distribution can be approximated by N(µ = 16, σ = 3.84). P(X < 12) = P(Z < −1.04) = 0.1492. (b) Since the success-failure condition is met, the sampling distribution for p̂ ∼ N(µ = 0.08, σ = 0.0192). P(p̂ < 0.06) = P(Z < −1.04) = 0.1492. (c) As expected, the two answers are the same.

4.9 The sampling distribution is the distribution of sample proportions from samples of the same size randomly sampled from the same population. As the sample size increases, the shape of the sampling distribution (when p = 0.1) will go from being right skewed to being more symmetric and resembling the normal distribution. With larger sample sizes, the spread of the sampling distribution gets smaller. Regardless of the sample size, the center of the sampling distribution is equal to the true mean of that population, provided the sampling isn
’t biased.

4.11 (a) The distribution is unimodal and strongly right skewed with a median between 5 and 10 years old. Ages range from 0 to slightly over 50 years old, and the middle 50% of the distribution is roughly between 5 and 15 years old. There are potential outliers on the higher end. (b) When the sample size is small, the sampling distribution is right skewed, just like the population distribution. As the sample size increases, the sampling distribution gets more unimodal, symmetric, and approaches normality. The variability also decreases. This is consistent with the Central Limit Theorem.

4.13 (a) Right skewed. There is a long tail on the higher end of the distribution but a much shorter tail on the lower end. (b) Less than, as the median would be less than the mean in a right skewed distribution. (c) We should not. (d) Even though the population distribution is not normal, the conditions for inference are reasonably satisfied, with the possible exception of skew. If the skew isn’t very strong (we should ask to see the data), then we can use the Central Limit Theorem to estimate this probability. For now, we’ll assume the skew isn’t very strong, though the description suggests it is at least moderate to strong. Use N(1.3, SD(x̄) = 0.3/√60): Z = 2.58 → 0.0049. (e) It would decrease it by a factor of 1/√2.

4.15 The centers are the same in each plot, and each data set is from a nearly normal distribution, though the histograms may not look very normal since each represents only 100 data points. The only way to tell which plot corresponds to which scenario is to examine the variability of each distribution. Plot B is the most variable, followed by Plot A, then Plot C. This means Plot B will correspond to the original data, Plot A to the sample means with size 5, and Plot C to the sample means with size 25.

4.17 (a) Z = −3.33 → 0.0004.
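Normal tail areas like these can be checked without a table; a minimal sketch that builds the standard normal CDF from math.erf (the function name normal_cdf is ours, not from the text):

```python
import math

def normal_cdf(z: float) -> float:
    """P(Z < z) for a standard normal random variable."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# 4.17 (a): lower-tail area at Z = -3.33.
print(round(normal_cdf(-3.33), 4))     # 0.0004

# 4.13 (d)-style upper tail: P(Z > 2.58).
print(round(1 - normal_cdf(2.58), 4))  # 0.0049
```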
(b) The population SD is known and the data are nearly normal, so the sample mean will be nearly normal with distribution N(µ, σ/√n), i.e. N(2.5, 0.0095). (c) Z = −
10.54 → ≈ 0. (d) [Figure: two density curves on a shared axis from about 2.41 to 2.59, one labeled Population and one labeled Sampling (n = 10).] (e) We could not estimate (a) without a nearly normal population distribution. We also could not estimate (c) since the sample size is not sufficient to yield a nearly normal sampling distribution if the population distribution is not nearly normal.

4.25 (a) µ(x̄1) = 15, σ(x̄1) = 20/√50 = 2.8284. (b) µ(x̄2) = 20, σ(x̄2) = 10/√30 = 1.8257. (c) µ(x̄2 − x̄1) = 20 − 15 = 5, σ(x̄2 − x̄1) = √(20^2/50 + 10^2/30) = 3.3665. (d) Think of x̄1 and x̄2 as being random variables, and we are considering the standard deviation of the difference of these two random variables, so we square each standard deviation, add them together, and then take the square root of the sum: SD(x̄2 − x̄1) = √(SD(x̄2)^2 + SD(x̄1)^2).

4.27 Want to find the probability that there will be 1,787 or more enrollees. Using the normal approximation, with µ = np = 2,500 × 0.7 = 1750 and σ = √(np(1 − p)) = √(2,500 × 0.7 × 0.3) ≈ 23, Z = 1.61, and P(Z > 1.61) = 0.0537. With a 0.5 correction: 0.0559.

4.29 Z = 1.56, P(Z > 1.56) = 0.0594, i.e. 6%.

4.31 This is the same as checking that the average bag weight of the 10 bags is greater than 46 lbs. SD(x̄) = 3.2/√10 = 1.012; z = (46 − 45)/1.012 = 0.988; P(z > 0.988) = 0.162 = 16.2%.

4.33 First we need to check that the necessary conditions are met. There are 100 × 0.360 = 36.0 expected successes and 100 × (1
−0.360) = 64.0 expected failures, therefore the success-failure condition is met. Calculate using either (1) the normal approximation to the binomial distribution or (2) the sampling distribution of p̂. (1) The binomial distribution can be approximated by N(µ = 36.0, σ = 4.8). P(X ≥ 35) = P(Z > −0.208) = 0.5823. (2) The sampling distribution of p̂ ∼ N(µ = 0.360, σ = 0.048). P(p̂ > 0.35) = P(Z > −0.208) = 0.5823.

4.19 (a) We cannot use the normal model for this calculation, but we can use the histogram. About 500 songs are shown to be longer than 5 minutes, so the probability is about 500/3000 = 0.167. (b) Two different answers are reasonable. Option 1: Since the population distribution is only slightly skewed to the right, even a small sample size will yield a nearly normal sampling distribution. We also know that the songs are sampled randomly and the sample size is less than 10% of the population, so the length of one song in the sample is independent of another. We are looking for the probability that the total length of 15 songs is more than 60 minutes, which means that the average song should last at least 60/15 = 4 minutes. Using SD(x̄) = 1.63/√15, Z = 1.31 → 0.0951. Option 2: Since the population distribution is not normal, a small sample size may not be sufficient to yield a nearly normal sampling distribution. Therefore, we cannot estimate the probability using the tools we have learned so far. (c) We can now be confident that the conditions are satisfied. Z = 0.92 → 0.1788.

4.21 (a) SD(x̄) = 25/√75 = 2.89. (b) Z = 1.73, which indicates that the two values are not unusually distant from each other when accounting for the uncertainty in John’s point estimate.

4.23 (a) µ(p̂NE) = 0.01, σ(p̂NE) = 0.0031. (b) µ(p̂NY) = 0.06, σ(p̂NY) = 0.0075. (c) µ(p̂NY − p̂NE) = 0.06 − 0.01 = 0.05, σ(p̂NY − p̂NE) = 0.0081. (d) We can think of p̂NE and p̂NY as being random variables, and we are considering the standard deviation of the difference of these two random variables, so we square each standard deviation, add them together, and then take the square root of the sum: SD(p̂NY − p̂NE) = √(SD(p̂NY)^2 + SD(p̂NE)^2).

5 Foundations for inference

5.1 (a) Mean. Each student reports a numerical value: a number of hours. (b) Mean. Each student reports a number, which is a percentage, and we can average over these percentages. (c) Proportion. Each student reports Yes or No, so this is a categorical variable and we use a proportion. (d) Mean. Each student reports a number, which is a percentage like in part (b). (e) Proportion. Each student reports whether or not s/he expects to get a job, so this is a categorical variable and we use a proportion.

5.3 (a) The sample is from all computer chips manufactured
at the factory during the week of production. We might be tempted to generalize the population to represent all weeks, but we should exercise caution here since the rate of defects may change over time. (b) The fraction of computer chips manufactured at the factory during the week of production that had defects. (c) Estimate the parameter using the data: p̂ = 27/212 = 0.127. (d) Standard error (or SE). (e) Compute the SE using p̂ = 0.127 in place of p: SE = √(p̂(1 − p̂)/n) = √(0.127(1 − 0.127)/212) = 0.023. (f) The standard error is the standard deviation of p̂. A value of 0.10 would be about one standard error away from the observed value, which would not represent a very uncommon deviation. (Usually beyond about 2 standard errors is a good rule of thumb.) The engineer should not be surprised. (g) Recomputed standard error using p = 0.1: SE = √(0.1(1 − 0.1)/212) = 0.021. This value isn’t very different, which is typical when the standard error is computed using relatively similar proportions (and even sometimes when those proportions are quite different!).

5.17 (a) H0: p(unemp) = p(underemp): The proportions of unemployed and underemployed people who are having relationship problems are equal. HA: p(unemp) ≠ p(underemp): The proportions of unemployed and underemployed people who are having relationship problems are different. (b) If in fact the two population proportions are equal, the probability of observing at least a 2% difference between the sample proportions is approximately 0.35. Since this is a high probability, we fail to reject the null hypothesis. The data do not provide convincing evidence that the proportions of unemployed and underemployed people who are having relationship problems are different.

5.19 (a) H0: Anti-depressants do not affect the symptoms of Fibromyalgia. HA: Anti-depressants do affect the symptoms of Fibromyalgia (either helping or harming). (b) Concluding that anti-depressants either help or worsen Fibromyalgia symptoms when they actually do neither. (c) Concluding that anti-depressants do not affect Fibromyalgia symptoms when they actually do.

5.21 (a) False. Confidence intervals provide a range of plausible values, and sometimes the truth is missed. A 95% confidence interval “misses” about 5% of the time. (b) True. Notice that the description focuses on the true population value. (c) True.
The 95% confidence interval is given by: (42.
6%, 47.4%), and we can see that 50% is outside of this interval. This means that in a hypothesis test, we would reject the null hypothesis that the proportion is 0.5. (d) False. The standard error describes the uncertainty in the overall estimate from natural fluctuations due to randomness, not the uncertainty corresponding to individuals’ responses.

5.5 (a) Sampling distribution. (b) If the population proportion is in the 5-30% range, the success-failure condition would be satisfied and the sampling distribution would be symmetric. (c) We use the standard error to describe the variability: SE = √(p(1 − p)/n) = √(0.08(1 − 0.08)/800) = 0.0096. (d) Standard error. (e) The distribution will tend to be more variable when we have fewer observations per sample.

5.7 First, recall that the general formula is point estimate ± z × SE. Then, identify the three different values. The point estimate is 45%, z = 1.96 for a 95% confidence level, and SE = 1.2%. Then, plug the values into the formula: 45% ± 1.96 × 1.2% → (42.6%, 47.4%). We are 95% confident that the proportion of US adults who live with one or more chronic conditions is between 42.6% and 47.4%.

5.9 (a) False. Inference is made on the population parameter, not the point estimate. The point estimate is always in the confidence interval. (b) True. (c) False. The confidence interval is not about a sample mean. (d) False. To be more confident that we capture the parameter, we need a wider interval. Think about needing a bigger net to be more sure of catching a fish in a murky lake. (e) True. Optional explanation: This is true since the normal model was used to model the sample mean. The margin of error is half the width of the interval, and the sample mean is the midpoint of the interval. (f) False. In the calculation of the standard error, we divide the standard deviation by the square root of the sample size. To cut the SE (or margin of error) in half
, we would need to sample 2^2 = 4 times the number of people in the initial sample.

5.11 (a) This claim is reasonable, since the entire interval lies above 50%. (b) The value of 70% lies outside of the interval, so we have convincing evidence that the researcher’s conjecture is wrong. (c) A 90% confidence interval will be narrower than a 95% confidence interval. Even without calculating the interval, we can tell that 70% would not fall in the interval, and we would reject the researcher’s conjecture based on a 90% confidence level as well.

5.13 (a) H0: p = 0.5 (Neither a majority nor minority of students’ grades improved.) HA: p ≠ 0.5 (Either a majority or a minority of students’ grades improved.) (b) H0: µ = 15 (The average amount of company time each employee spends not working is 15 minutes for March Madness.) HA: µ ≠ 15 (The average amount of company time each employee spends not working is different than 15 minutes for March Madness.)

5.15 (1) The hypotheses should be about the population proportion (p), not the sample proportion. (2) The null hypothesis should have an equal sign. (3) The alternative hypothesis should have a not-equals sign, and (4) it should reference the null value, p0 = 0.6, not the observed sample proportion. The correct way to set up these hypotheses is: H0: p = 0.6 and HA: p ≠ 0.6.

5.23 (a) H0: The restaurant meets food safety and sanitation regulations. HA: The restaurant does not meet food safety and sanitation regulations. (b) The food safety inspector concludes that the restaurant does not meet food safety and sanitation regulations and shuts down the restaurant when the restaurant is actually safe. (c) The food safety inspector concludes that the restaurant meets food safety and sanitation regulations and the restaurant stays open when the restaurant is actually not safe. (d) A Type 1 Error may be more problematic for the restaurant owner since his restaurant gets shut down even though it meets the food safety and sanitation regulations. (e) A Type 2 Error may be more problematic for diners since the restaurant deemed safe by the inspector is actually not. (f) Strong evidence.
Diners would rather a restaurant that meets the regulations get shut down than a restaurant that doesn’t meet the regulations not get shut down.

5.25 True. If the sample size gets ever larger, then the standard error will become ever smaller. Eventually, when the sample size is large enough and the standard error is tiny, we can find statistically
significant yet very small differences between the null value and point estimate (assuming they are not exactly equal).

6 Inference for categorical data

6.1 (a) True. See the reasoning of 6.1(b). (b) True. We take the square root of the sample size in the SE formula. (c) True. The independence and success-failure conditions are satisfied. (d) True. The independence and success-failure conditions are satisfied.

6.3 (a) False. A confidence interval is constructed to estimate the population proportion, not the sample proportion. (b) True. 95% CI: 82% ± 2%. (c) True. By the definition of the confidence level. (d) True. Quadrupling the sample size decreases the SE and ME by a factor of 1/√4. (e) True. The 95% CI is entirely above 50%.

6.5 With a random sample, independence is satisfied. The success-failure condition is also satisfied. ME = z√(p̂(1 − p̂)/n) = 1.96 × √(0.56 × 0.44/600) = 0.0397 ≈ 4%.

6.7 (a) No. The sample only represents students who took the SAT, and this was also an online survey. (b) (0.5289, 0.5711). We are 90% confident that 53% to 57% of high school seniors who took the SAT are fairly certain that they will participate in a study abroad program in college. (c) 90% of such random samples would produce a 90% confidence interval that includes the true proportion. (d) Yes. The interval lies entirely above 50%.

6.9 (a) We want to check for a majority (or minority), so we use the following hypotheses: H0: p = 0.5, HA: p ≠ 0.5. We have a sample proportion of p̂ = 0.55 and a sample size of n = 617 independents. Since this is a random sample, independence is satisfied. The success-failure condition is also satisfied: 617 × 0
.5 and 617 × (1 − 0.5) are both at least 10 (we use the null proportion p0 = 0.5 for this check in a one-proportion hypothesis test). Therefore, we can model p̂ using a normal distribution with a standard error of SE = √(p(1 − p)/n) = 0.02. (We use the null proportion p0 = 0.5 to compute the standard error for a one-proportion hypothesis test.) Next, we compute the test statistic: Z = (0.55 − 0.5)/0.02 = 2.5. This yields a one-tail area of 0.0062, and a p-value of 2 × 0.0062 = 0.0124. Because the p-value is smaller than 0.05, we reject the null hypothesis. We have strong evidence that the support is different from 0.5, and since the data provide a point estimate above 0.5, we have strong evidence to support this claim by the TV pundit. (b) No. Generally we expect a hypothesis test and a confidence interval to align, so we would expect the confidence interval to show a range of plausible values entirely above 0.5. However, if the confidence level is misaligned (e.g. a 99% confidence level and an α = 0.05 significance level), then this is no longer generally true.

6.11 (a) Identify: H0: p = 0.5. HA: p > 0.5. α = 0.05. Choose: 1-proportion Z-test. Check: Independence (random sample, < 10% of population) is satisfied, as is the success-failure condition (using p0 = 0.5, we expect 40 successes and 40 failures). Calculate: Z = 2.91 → p-value = 0.0018. Conclude: Since the p-value < 0.05, we reject the null hypothesis. The data provide strong evidence that the rate of correctly identifying a soda for these people is significantly better than just by random guessing. (b) The p-value represents the following conditional probability: P(p̂ > 0.6625 | p = 0.5). If in fact people cannot tell the
difference between diet and regular soda and they randomly guess, the probability of getting a random sample of 80 people where 66.25% (53/80) or higher identify a soda correctly would be 0.0018.

6.25 (a) False. The chi-square distribution has one parameter called degrees of freedom. (b) True. (c) True. (d) False. As the degrees of freedom increases, the shape of the chi-square distribution becomes more symmetric.

6.27 (a) H0: The distribution of the format of the book used by the students follows the professor’s predictions. HA: The distribution of the format of the book used by the students does not follow the professor’s predictions. (b) E(hard copy) = 126 × 0.60 = 75.6. E(print) = 126 × 0.25 = 31.5. E(online) = 126 × 0.15 = 18.9. (c) Independence: The sample is not random. However, if the professor has reason to believe that the proportions are stable from one term to the next and students are not affecting each other’s study habits, independence could be reasonable. Sample size: All expected counts are at least 5. (d) χ² = 2.32, df = 2, p-value = 0.313. (e) Since the p-value is large, we fail to reject H0. The data do not provide strong evidence indicating the professor’s predictions were statistically inaccurate.
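The goodness-of-fit statistic in 6.27 can be reproduced in a few lines. The observed counts below are an assumption on our part, chosen because they sum to 126 and reproduce the reported χ² = 2.32; the expected counts come from the professor's predicted proportions. For df = 2, the chi-square upper-tail probability has the closed form exp(−x/2), so no table is needed:

```python
import math

# Assumed observed counts (hard copy, print, online); chosen to be
# consistent with n = 126 and the reported chi-square statistic.
observed = [71, 30, 25]
proportions = [0.60, 0.25, 0.15]
n = sum(observed)                          # 126
expected = [n * p for p in proportions]    # [75.6, 31.5, 18.9]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 2))                    # 2.32

# Chi-square upper tail for df = 2: P(X > x) = exp(-x/2).
p_value = math.exp(-chi_sq / 2)
print(round(p_value, 3))                   # 0.313
```

Since the p-value is far above 0.05, the code agrees with the conclusion in part (e).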
6.29 (a) Two-way table:

                         Quit: Yes   Quit: No   Total
Patch + support group    40          110        150
Only patch               30          120        150
Total                    70          230        300

(b-i) Erow1,col1 = (row 1 total)×(col
1 total)/(table total) = 35. This is lower than the observed value. (b-ii) Erow2,col2 = (row 2 total)×(col 2 total)/(table total) = 115. This is lower than the observed value.

6.13 Because a sample proportion (p̂ = 0.55) is available, we use this for the sample size calculations. The margin of error for a 90% confidence interval is 1.6449 × SE = 1.6449 × √(p(1 − p)/n). We want this to be less than 0.01, where we use p̂ in place of p: 1.6449 × √(0.55(1 − 0.55)/n) ≤ 0.01, so 1.6449^2 × 0.55(1 − 0.55)/0.01^2 ≤ n. From this, we get that n must be at least 6697.

6.15 This is not a randomized experiment, and it is unclear whether people would be affected by the behavior of their peers. That is, independence may not hold. Additionally, there are only 5 interventions under the provocative scenario, so the success-failure condition does not hold. Even if we consider a hypothesis test where we pool the proportions, the success-failure condition will not be satisfied. Since one condition is questionable and the other is not satisfied, the difference in sample proportions will not follow a nearly normal distribution.

6.17 (a) False. The entire confidence interval is above 0. (b) True. (c) True. (d) True. (e) False. It is simply the negated and reordered values: (-0.06, 0.02).

6.19 (a) Standard error: SE = √(0.79(1 − 0.79)/347 + 0.55(1 − 0.55)/617) = 0.03. Using z = 1.96, we get: 0.79 − 0.55 ± 1.96 × 0.03 → (0.181, 0.299). We are 95% confident that the proportion of Democrats who support the plan is 18.1% to 29.9% higher than the proportion of Independents who support the plan. (b) True.

6.21 Identify: Subscript C means control group. Sub
script T means truck drivers. H0: pC = pT. HA: pC ≠ pT. α = 0.05. Choose: 2-proportion Z-test. Check: Independence is satisfied (random samples that are independent), as is the success-failure condition, which we check using the pooled proportion (p̂pool = 70/495 = 0.141). Calculate: Z = −1.65 → p-value = 0.0989. Conclude: Since the p-value is > α, we fail to reject H0. The data do not provide strong evidence that the rates of sleep deprivation are different for non-transportation workers and truck drivers.

6.23 (a) Subscript V means vitamin group. Subscript NV means no vitamin group. H0: pV = pNV. HA: pV ≠ pNV. Independence is satisfied (random samples, < 10% of the population), as is the success-failure condition, which we would check using the pooled proportion (p̂pool = 254/483 = 0.53). Z = 2.99 → p-value = 0.0028. Since the p-value is low, we reject H0. There is strong evidence of a difference in the rates of autism of children of mothers who did and did not use prenatal vitamins during the first three months before pregnancy. (b) The title of this newspaper article makes it sound like using prenatal vitamins can prevent autism, which is a causal statement. Since this is an observational study, we cannot make causal statements based on the findings of the study. A more accurate title would be “Mothers who use prenatal vitamins before pregnancy are found to have children with a lower rate of autism”.

6.31 Identify: H0: Opinions regarding offshore drilling for oil and having a college degree are independent. HA: Opinions regarding offshore drilling for oil and having a college degree are dependent. α = 0.05. Choose: chi-square test for independence. Check: Erow 1,col 1 = 151.5; Erow 2,col 1 = 162.1; Erow 3,col 1 = 124.5; Erow 1,col 2 = 134.5; Erow 2,col 2 = 143.9; Erow 3,col 2 = 110.5. Independence: The sample is random, and from less than 10% of the population, so independence between observations is reasonable. Expected counts: All expected counts are at least 5. Calculate: χ² = 11.47, df = 2 → p-value = 0.003. Conclude: Since the p-value < α, we reject H0. There is strong evidence that there is an association between support for offshore drilling and having a college degree.

6.33 No. The samples at the beginning and at the end of the semester are not independent since
the survey is conducted on the same students.

6.35 (a) H0: The age of Los Angeles residents is independent of the shipping carrier preference variable. HA: The age of Los Angeles residents is associated with the shipping carrier preference variable. (b) The conditions are not satisfied since some expected counts are below 5.

6.37 (a) Independence is satisfied (random sample), as is the success-failure condition (40 smokers, 160 non-smokers). The 95% CI: (0.145, 0.255). We are 95% confident that 14.5% to 25.5% of all students at this university smoke. (b) We want z*SE to be no larger than 0.02 for a 95% confidence level. We use z* = 1.96 and plug in the point estimate p̂ = 0.2 within the SE formula: 1.96 × √(0.2(1 − 0.2)/n) ≤ 0.02. The sample size n should be at least 1,537.

6.39 (a) Proportion of graduates from this university who found a job within one year of graduating. p̂ = 348/400 = 0.87. (b) This is a random sample, so the observations are independent. Success-failure condition is satisfied: 348 successes, 52 failures, both well above 10. (c) (0.8371, 0.9029). We are 95% confident that approximately 84% to 90% of graduates from this university found a job within one year of completing their undergraduate degree. (d) 95% of such random samples would produce a 95% confidence interval that includes the true proportion of students at this university who found a job within one year of graduating from college. (e) (0.8267, 0.9133). Similar interpretation as before. (f) The 99% CI is wider, as we are more confident that the true proportion is within the interval and so need to cover a wider range.

6.41 Use a chi-square goodness of fit test. H0: Each option is equally likely. HA: Some options are preferred over others. Total sample size: 99. Expected counts: (1/3) × 99 = 33 for each option.
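This goodness-of-fit setup can be checked numerically; a minimal pure-Python sketch, using the observed counts 43, 21, and 35 that appear in the solution (the option labels themselves are not given in this excerpt):

```python
import math

# Chi-square goodness-of-fit check for the equal-preference hypothesis.
# Observed counts are taken from the solution; labels are not in the excerpt.
observed = [43, 21, 35]
n = sum(observed)                  # 99
expected = [n / 3] * 3             # 33 each under H0: options equally likely

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1             # 2

# With df = 2 the chi-square upper-tail probability has the closed form
# P(X² > x) = exp(-x/2), so no special functions are needed here.
p_value = math.exp(-chi_sq / 2)
print(round(chi_sq, 2), round(p_value, 3))   # 7.52 0.023
```

For other degrees of freedom there is no such closed form; a survival function such as `scipy.stats.chi2.sf` would be the usual tool.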
These are all above 5, so conditions are satisfied. df = 3 − 1 = 2 and χ² = (43 − 33)²/33 + (21 − 33)²/33 + (35 − 33)²/33 = 7.52 → p-value = 0.023. Since the p-value is less than 5%, we reject H0. The data provide convincing evidence that some options are preferred over others.

6.43 (a) H0: p = 0.38. HA: p ≠ 0.38. Independence (random sample) and the success-failure condition are satisfied. Z = −20.5 → p-value ≈ 0. Since the p-value is very small, we reject H0. The data provide strong evidence that the proportion of Americans who only use their cell phones to access the internet is different than the Chinese proportion of 38%, and the data indicate that the proportion is lower in the US. (b) If in fact 38% of Americans used their cell phones as a primary access point to the internet, the probability of obtaining a random sample of 2,254 Americans where 17% or less or 59% or more use only their cell phones to access the internet would be approximately 0. (c) (0.1545, 0.1855). We are 95% confident that approximately 15.5% to 18.6% of all Americans primarily use their cell phones to browse the internet.

6.45 (a) Since there are 3 independent random samples here, we do a test for homogeneity. df = 2. (b) Goodness of fit test, df = 2. (c) Goodness of fit test, df = 4.

498 APPENDIX A. EXERCISE SOLUTIONS

7 Inference for numerical data

7.1 (a) df = 6 − 1 = 5, t*5 = 2.02 (column with two tails of 0.10, row with df = 5). (b) df = 21 − 1 = 20, t*20 = 2.53 (column with two tails of 0.02, row with df = 20). (c) df = 28, t*28 = 2.05. (d) df = 11, t*11 = 3.11.

7.3 (a) 0.085, do not reject H0. (b) 0.003, reject H0. (c) 0.438, do not reject H0. (d) 0.042, reject H0.

7.5 The mean is the midpoint: x̄ = 20. Identify the margin of error: ME = 1.015, then use t*35 = 2.03 and SE = s/√n in the formula for margin of error to identify s = 3.

7.7 (a) H0: µ = 8 (New Yorkers sleep 8 hrs per night on average.) HA: µ ≠ 8 (New Yorkers sleep less or more than 8 hrs per night on average.) (b) Independence: The sample is random. The min/max suggest there are no concerning outliers. T = −1.75. df = 25 − 1 = 24. (c) p-value = 0.093. If in fact the true population mean of the amount New Yorkers sleep per night was 8 hours, the probability of getting a random sample of 25 New Yorkers where the average amount of sleep is 7.73 hours per night or less (or 8.27 hours or more) is 0.093. (d) Since p-value > 0.05, do not reject H0. The data do not provide strong evidence that New Yorkers sleep more or less than 8 hours per night on average. (e) Yes, since we did not reject H0.

7.9 T is either −2.09 or 2.09. Then x̄ is one of the following:

−2.09 = (x̄ − 60)/(8/√20) → x̄ = 56.26
2.09 = (x̄ − 60)/(8/√20) → x̄ = 63.74

7.11 (a) Identify: H0: µ = 5. HA: µ < 5. We’ll use α = 0.05. Choose: 1-sample t-test. Check: This is a random sample, so the observations are independent. To proceed, we assume the distribution of years of piano lessons is approximately normal. Calculate: SE = 2.2/√20 = 0.4919. The test statistic is T = (4.6 − 5)/SE = −0.81. df = 20 − 1 = 19. The one-tail p-value is about 0.21. Conclude: p-value > α = 0.05, so we do not reject H0. That is, we do not have sufficiently strong evidence to reject Georgianna’s claim. (b) Identify: estimate the average number of years a child takes piano lessons in this city with 95% confidence. Choose: 1-sample t-interval. Check: same as in part (a). Calculate: Using SE = 0.4919 and t*df=19 = 2.093, the confidence interval is (3.57, 5.63). Conclude: We are 95% confident that the average number of years a child takes piano lessons in this city is 3.57 to 5.63 years. We do not have evidence that the average is not 5, because 5 is in the interval. (c) They agree, since we did not reject the null hypothesis and the null value of 5 was in the t-interval.

7.13 If the sample is large, then the margin of error will be about 1.96 × 100/√n. We want this value to be less than 10, which leads to n ≥ 384.16, meaning we need a sample size of at least 385 (round up for sample size calculations!).

7.15 Paired, data are recorded in the same cities at two different time points. The temperature in a city at one point is not independent of the temperature in the same city at another time point.

7.17 (a) Since it’s the same students at the beginning and the end of the semester, there is a pairing between the data sets; for a given student, their beginning and end of semester grades are dependent. (b) Since the subjects were sampled randomly, each observation in the men’s group does not have a special correspondence with exactly one observation in the other (women’s) group. (c) Since it’s the same subjects at the beginning and the end of the study, there is a pairing between the data sets; for a given subject, their beginning and end of study artery thickness measurements are dependent. (d) Since it’s the same subjects at the beginning and the end of the study, there is a pairing between the data sets; for a given subject, their beginning and end of study weights are dependent.

7.19 (a) For each observation in one data set, there is exactly one specially corresponding observation in the other data set for the same geographic location. The
data are paired. (b) H0: µdiff = 0 (There is no difference in the average number of days exceeding 90°F in 1948 and 2018 for NOAA stations.) HA: µdiff ≠ 0 (There is a difference.) (c) Locations were randomly sampled, so independence is reasonable. The sample size is at least 30, so we’re just looking for particularly extreme outliers: none are present (the observation off to the left in the histogram would be considered a clear outlier, but not a particularly extreme one). Therefore, the conditions are satisfied. (d) SE = 17.2/√197 = 1.23. T = (2.9 − 0)/1.23 = 2.36 with degrees of freedom df = 197 − 1 = 196. This leads to a one-tail area of 0.0096 and a p-value of about 0.019. (e) Since the p-value is less than 0.05, we reject H0. The data provide strong evidence that NOAA stations observed more 90°F days in 2018 than in 1948. (f) Type 1 Error, since we may have incorrectly rejected H0. This error would mean that NOAA stations did not actually observe a difference, but the sample we took just so happened to make it appear that this was the case. (g) No, since we rejected H0, which had a null value of 0.

7.21 Identify: we want to estimate the average difference in the number of days exceeding 90°F (2018 − 1948) with 90% confidence. Choose: 1-sample t-interval with paired data. Check: ndiff = 197 ≥ 30 and the locations are randomly sampled. Calculate: SEdiff = 1.23, df = 196, t* ≈ 1.65. 2.9 ± 1.65 × 1.23 → (0.87, 4.93). Conclude: We are 90% confident that there was an increase of 0.87 to 4.93 in the average difference of days that hit 90°F in 2018 relative to 1948 for NOAA stations. We have evidence that the average difference of days that hit 90°F increased, because the interval is entirely above 0.

7.23 Identify: H0: µ0.99 = µ1 and HA: µ0.99 ≠ µ1; let α = 0.05. Choose: 2-sample t-test. Check: Independence: Both samples are random and represent less than 10% of their respective populations. Also, we have no reason to think that the 0.99 carat diamonds are not independent of the 1 carat diamonds since they are both sampled randomly. Normal populations: The sample distributions are not very skewed, hence we find it reasonable that the underlying population distributions are nearly normal. Calculate: T = −2.82, df = 42.5, p-value = 0.007. Conclude: Since the p-value < 0.05, reject H0. The data provide convincing evidence that the average standardized prices of 0.99 carat and 1 carat diamonds are different.

7.25 (a) Chickens fed linseed weighed an average of 218.75 grams while those fed horsebean weighed an average of 160.20 grams. Both distributions are relatively symmetric with no apparent outliers. There is more variability in the weights of chickens fed linseed. (b) H0: µls = µhb. HA: µls ≠ µhb. We leave the conditions to you to consider. T = 3.02, df = min(11, 9) = 9 → p-value = 0.014. Since the p-value < 0.05, reject H0. The data provide strong evidence that there is a significant difference between the average weights of chickens that were fed linseed and horsebean. (c) Type 1 Error, since we rejected H0. (d) Yes, since p-value > 0.01, we would not have rejected H0.

7.27 H0: µC = µS. HA: µC ≠ µS. T = 3.27, df = 11 → p-value = 0.007. Since the p-value < 0.05, reject H0. The data provide strong evidence that the average weight of chickens that were fed casein is different than the average weight of chickens that were fed soybean (with weights from casein being higher). Since this is a randomized experiment, the observed difference can be attributed to the diet.

7.29 Let µdiff = µpre − µpost. H0: µdiff = 0: Treatment
has no effect. HA: µdiff ≠ 0: Treatment has an effect on P.D.T. scores, either positive or negative. Conditions: The subjects are randomly assigned to treatments, so independence within and between groups is satisfied. All three sample sizes are smaller than 30, so we look for clear outliers. There is a borderline outlier in the first treatment group. Since it is borderline, we will proceed, but we should report this caveat with any results. For all three groups: df = 13. T1 = 1.89 → p-value = 0.081, T2 = 1.35 → p-value = 0.200, T3 = −1.40 → p-value = 0.185. We do not reject the null hypothesis for any of these groups. As noted earlier, there is some uncertainty about whether the method applied is reasonable for the first group.

7.31 H0: µT = µC. HA: µT ≠ µC. T = 2.24, df = 21 → p-value = 0.036. Since the p-value < 0.05, reject H0. The data provide strong evidence that the average food consumption by the patients in the treatment and control groups are different. Furthermore, the data indicate patients in the distracted eating (treatment) group consume more food than patients in the control group.

7.33 False. While it is true that paired analysis requires equal sample sizes, having equal sample sizes isn’t, on its own, sufficient for doing a paired test. Paired tests require that there be a special correspondence between each pair of observations in the two groups.

7.35 (a) We are building a distribution of sample statistics, in this case the sample mean. Such a distribution is called a sampling distribution. (b) Because we are dealing with the distribution of sample means, we need to check to see if the Central Limit Theorem applies. Our sample size is greater than 30, and we are told that random sampling is employed. With these conditions met, we expect that the distribution of the sample mean will be nearly normal and therefore symmetric.
(c) Because we are dealing with a sampling distribution, we measure its variability with the standard error: SE = 18.2/√45 = 2.713. (d) The sample means will be more variable with the smaller sample size.

7.37 Independence: it is a random sample, so we can assume that the students in this sample are independent of each other with respect to the number of exclusive relationships they have been in. Notice that there are no students who have had no exclusive relationships in the sample, which suggests some student responses are likely missing (perhaps only positive values were reported). The sample size is at least 30, and there are no particularly extreme outliers, so the normality condition is reasonable. 90% CI: (2.97, 3.43). We are 90% confident that undergraduate students have been in 2.97 to 3.43 exclusive relationships, on average.

7.39 First, the hypotheses should be about the population mean (µ), not the sample mean. Second, the null hypothesis should have an equal sign and the alternative hypothesis should be about the null hypothesized value, not the observed sample mean. The correct way to set up these hypotheses is shown below: H0: µ = 10 hours. HA: µ ≠ 10 hours. A two-sided test allows us to consider the possibility that the data show us something that we would find surprising.

8 Introduction to linear regression

8.1 (a) The residual plot will show randomly distributed residuals around 0. The variance is also approximately constant. (b) The residuals will show a fan shape, with higher variability for smaller x. There will also be many points on the right above the line. There is trouble with the model being fit here.

8.3 (a) Strong relationship, but a straight line would not fit the data. (b) Strong relationship, and a linear fit would be reasonable. (c) Weak relationship, and trying a linear fit would be reasonable. (d) Moderate relationship, but a straight line would not fit the data. (e) Strong relationship, and a linear fit would be reasonable. (f) Weak relationship, and trying a linear fit would be reasonable.
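The least squares identities used throughout these chapter 8 solutions — b = r × sy/sx, with the fitted line passing through (x̄, ȳ) — are easy to verify numerically. A minimal pure-Python sketch on simulated data (the data here are hypothetical, not from any exercise):

```python
import math
import random

# Simulate data with a known linear trend (hypothetical, for illustration).
random.seed(7)
x = [random.uniform(0, 10) for _ in range(100)]
y = [3.0 + 0.5 * xi + random.gauss(0, 1.0) for xi in x]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sx = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
sy = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))
r = sum((xi - xbar) * (yi - ybar)
        for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)

b = r * sy / sx       # slope: b = r * sy/sx
a = ybar - b * xbar   # intercept: the line passes through (x-bar, y-bar)

# With an intercept in the model, the residuals always average to zero.
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print(round(b, 2), round(r, 2))
```

The estimated slope should land near the true value 0.5 used in the simulation, and the residual mean is zero up to floating-point error.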
7.41 (a) These data are paired. For example, the Friday the 13th in, say, September 1991 would probably be more similar to the Friday the 6th in September 1991 than to a Friday the 6th in another month or year. (b) Let µdiff = µsixth − µthirteenth. H0: µdiff = 0. HA: µdiff ≠ 0. (c) Independence: The months selected are not random. However, if we think these dates are roughly equivalent to a simple random sample of all such Friday 6th/13th date pairs, then independence is reasonable. To proceed, we must make this strong assumption, though we should note this assumption in any reported results. Normality: With fewer than 10 observations, we would need to see clear outliers to be concerned. There is a borderline outlier on the right of the histogram of the differences, so we would want to report this in formal analysis results. (d) T = 4.93 for df = 10 − 1 = 9 → p-value = 0.001. (e) Since the p-value < 0.05, reject H0. The data provide strong evidence that the average number of cars at the intersection is higher on Friday the 6th than on Friday the 13th. (We should exercise caution about generalizing the interpretation to all intersections or roads.) (f) If the average number of cars passing the intersection actually was the same on Friday the 6th and 13th, then the probability that we would observe a test statistic so far from zero is less than 0.01. (g) We might have made a Type 1 Error, i.e. incorrectly rejected the null hypothesis.

8.5 (a) Exam 2, since there is less scatter in the plot of final exam grade versus Exam 2. Notice that the relationship between Exam 1 and the final exam appears to be slightly nonlinear. (b) Exam 2 and the final are relatively close to each other chronologically, or Exam 2 may be cumulative and so has greater similarities in material to the final exam. Answers may vary.

8.7 (a) r = −0.7 → (4). (b) r = 0.45 → (3). (c) r = 0.06 → (1). (d) r = 0.92 → (2).

8.9 (a) The relationship is positive, weak, and possibly linear. However, there do appear to be some anomalous observations along the left where several students have the same height that is notably far from the cloud of the other points. Additionally, there are many students who appear not to have driven a car, and they are represented by a set of points along the bottom of the scatterplot. (b) There is no obvious explanation why simply being tall should lead a person to drive faster. However, one confounding factor is gender. Males tend to be taller than females on average, and personal experiences (anecdotal) may suggest they drive faster. If we were to follow up on this suspicion, we would find that sociological studies confirm it. (c) Males are taller on average and they drive faster. The gender variable is indeed an important confounding variable.

8.11 (a) There is a somewhat weak, positive, possibly linear relationship between the distance traveled and travel time. There is clustering near the lower left corner that we should take special note of. (b) Changing the units will not change the form, direction or strength of the relationship between the two variables. If longer distances measured in miles are associated with longer travel times measured in minutes, longer distances measured in kilometers will be associated with longer travel times measured in hours. (c) Changing units doesn’t affect correlation: r = 0.636.

8.13 (a) There is a moderate, positive, and linear relationship between shoulder girth and height. (b) Changing the units, even if just for one of the variables, will not change the form, direction or strength of the relationship between the two variables.

8.15 In each part, we can write the woman’s age as a linear function of the spouse’s age. (a) ageW = ageS + 3. (b) ageW = ageS − 2. (c) ageW = 2 × ageS. Since the slopes are positive and these are perfect linear relationships, the correlation will be exactly 1 in all three parts. An alternative way to gain insight into this solution is to create a mock data set, e.g. 5 women aged 26, 27, 28, 29, and 30, then find the spouses’ ages for each woman in each part and create a scatterplot.

8.17 Correlation: no units. Slope: kg/cm. Intercept: kg.

8.19 Over-estimate. Since the residual is calculated as observed − predicted, a negative residual means that the predicted value is higher than the observed value.

8.21 (a) There is a positive, very strong, linear association between the number of tourists and spending. (b) Explanatory: number of tourists (in thousands). Response: spending (in millions of US dollars). (c) We can predict spending for a given number of tourists using a regression line. This may be useful information for determining how much the country may want to spend in advertising abroad, or to forecast expected revenues from tourism. (d) Even though the relationship appears linear in the scatterplot, the residual plot actually shows a nonlinear relationship. This is not a contradiction: residual plots can show divergences from linearity that can be difficult to see in a scatterplot. A simple linear model is inadequate for modeling these data. It is also important to consider that these data are observed sequentially, which means there may be a hidden structure not evident in the current plots but that is important to consider.

8.23 (a) First calculate the slope: b = r × sy/sx = 0.636 × 113/99 = 0.726. Next, make use of the fact that the regression line passes through the point (x̄, ȳ): ȳ = a + b × x̄. Plug in x̄, ȳ, and b, and solve for a: 51. Solution: travel time = 51 + 0.726 × distance. (b) b: For each additional mile in distance, the model predicts an additional 0.726 minutes in travel time. a: When the distance traveled is 0 miles, the travel time is expected to be 51 minutes. It does not make sense to have a travel distance of 0 miles in this context. Here, the y-intercept serves only to adjust the height of the line and is meaningless by itself. (c) R² = 0.636² = 0.40. About 40% of the variability in travel time is accounted for by the model, i.e. explained by the distance traveled. (d) travel time = 51 + 0.726 × distance = 51 + 0.726 × 103 ≈ 126 minutes. (Note: we should be cautious in our predictions with this model since we have not yet evaluated whether it is a well-fit model.) (e) ei = yi − ŷi = 168 − 126 = 42 minutes. A positive residual means that the model underestimates the travel time. (f) No, this calculation would require extrapolation.

8.25 (a) murder = −29.901 + 2.559 × poverty%. (b) The expected murder rate in metropolitan areas with no poverty is −29.901 per million. This is obviously not a meaningful value; it just serves to adjust the height of the regression line. (c) For each additional percentage point increase in poverty, we expect murders per million to be higher on average by 2.559. (d) Poverty level explains 70.52% of the variability in murder rates in metropolitan areas. (e) √0.7052 = 0.8398.

8.27 (a) There is an outlier in the bottom right. Since it is far from the center of the data, it is a point with high leverage. It is also an influential point since, without that observation, the regression line would have a very different slope. (b) There is an outlier in the bottom right. Since it is far from the center of the data, it is a point with high leverage. However, it does not appear to be affecting the line much, so it is not an influential point. (c) The observation is in the center of the data (in the x-axis direction), so this point does not have high leverage. This means the point won’t have much effect on the slope of the line and so is not an influential point.

8.29 (a) There is a negative, moderate-to-strong, somewhat linear relationship between the percent of families who own their home and the percent of the population living in urban areas in 2010. There is one outlier: a state where 100% of the population is urban. The variability in the percent of homeownership also increases as we move from left to right in the plot. (b) The outlier is located in the bottom right corner, horizontally far from the center of the other points, so it is a point with high leverage.
It is an influential point since excluding this point from the analysis would greatly affect the slope of the regression line. 8.31 (a) The relationship is positive, linear, and moderate. Due to the clear non-constant variance in the residuals, a linear model is not appropriate for modeling the relationship between hours worked and
income. (b) Neither model is particularly good: for the logged model, the scatterplot and residual plot show more constant variance in the residuals, but the scatterplot with the logged model looks to have a bit of curvature. (c) For each additional hour worked, we would expect income to increase on average by a factor of e^0.058 ≈ 1.06, i.e. by 6%.

8.33 (a) The relationship is positive, moderate-to-strong, and linear. There are a few outliers but no points that appear to be influential. (b) weight = −105.0113 + 1.0176 × height. Slope: For each additional centimeter in height, the model predicts the average weight to be 1.0176 additional kilograms (about 2.2 pounds). Intercept: People who are 0 centimeters tall are expected to weigh −105.0113 kilograms. This is obviously not possible. Here, the y-intercept serves only to adjust the height of the line and is meaningless by itself. (c) H0: The true slope coefficient of height is zero (β = 0). HA: The true slope coefficient of height is different than zero (β ≠ 0). The p-value for the two-sided alternative hypothesis (β ≠ 0) is incredibly small, so we reject H0. The data provide convincing evidence that height and weight are positively correlated. The true slope parameter is indeed greater than 0. (d) R² = 0.72² = 0.52. Approximately 52% of the variability in weight can be explained by the height of individuals.

8.35 (a) H0: β = 0. HA: β ≠ 0. The p-value, as reported in the table, is incredibly small and is smaller than 0.05, so we reject H0. The data provide convincing evidence that women’s and spouses’ heights are positively correlated. (b) heightS = 43.5755 + 0.2863 × heightW. (c) Slope: For each additional inch in the woman’s height, the spouse’s height is expected to be an additional 0.2863 inches, on average. Intercept: Women who are 0 inches tall are predicted to have spouses who are 43.5755 inches tall. The intercept here is meaningless, and it serves
only to adjust the height of the line. (d) The slope is positive, so r must also be positive: r = √0.09 = 0.30. (e) 63.2612. Since R² is low, the prediction based on this regression model is not very reliable. (f) No, we should avoid extrapolating.

8.37 (a) H0: β = 0; HA: β ≠ 0. (b) The p-value for this test is approximately 0, therefore we reject H0. The data provide convincing evidence that poverty percentage is a significant predictor of murder rate. (c) n = 20, df = 18, t*18 = 2.10; 2.559 ± 2.10 × 0.390 = (1.74, 3.378). For each percentage point poverty is higher, the murder rate is expected to be higher on average by 1.74 to 3.378 per million. (d) Yes, we rejected H0 and the confidence interval does not include 0.

8.39 (a) True. (b) False, correlation is a measure of the linear association between any two numerical variables.

8.41 There is an upwards trend. However, the variability is higher for higher calorie counts, and it looks like there might be two clusters of observations above and below the line on the right, so we should be cautious about fitting a linear model to these data.

8.43 (a) r = −0.72 → (2). (b) r = 0.07 → (4). (c) r = 0.86 → (1). (d) r = 0.99 → (3).

8.45 (a) There is a weak-to-moderate, positive, linear association between height and volume. There also appears to be some non-constant variance since the volume of trees is more variable for taller trees. (b) There is a very strong, positive association between diameter and volume. The relationship may include slight curvature. (c) Since the relationship is stronger between volume and diameter, using diameter would be preferred. However, as mentioned in part (b), the relationship between volume and diameter may not be linear, and so we may benefit from a model that properly accounts for nonlinearity.

Appendix B Data sets within the text

Each data set within the text is described in this appendix.
For those data sets that appear in multiple sections of a chapter, only the first section is listed for that chapter. If a data set is not listed here, e.g. Section 3.2.10 lists imagined probabilities for whether a parking garage will fill up and whether there is a sporting event that same evening for an unnamed college, it may not be listed in this data appendix. When a raw data set is available rather than just a description, there is a corresponding page for the data set at openintro.org/data. That webpage also includes many more data sets than are covered in this textbook, and each data set on the website includes a description, its source, a detailed overview of each data set’s variables, and download options.

Chapter 1: Data collection

1.1 stent30, stent365 → The stent data is split across two data sets, one for the 0-30 day and one for the 0-365 day results. Chimowitz MI, Lynn MJ, Derdeyn CP, et al. 2011. Stenting versus Aggressive Medical Therapy for Intracranial Arterial Stenosis. New England Journal of Medicine 365:993-1003. www.nejm.org/doi/full/10.1056/NEJMoa1105335. NY Times article: www.nytimes.com/2011/09/08/health/research/08stent.html.

1.2 loan50, loans full schema → This data comes from Lending Club (lendingclub.com), which provides a large set of data on the people who received loans through their platform. The data used in the textbook comes from a sample of the loans made in Q1 (Jan, Feb, March) 2018.

1.2 county, county complete → These data come from several government sources. For those variables included in the county data set, only the most recent data is reported, as of what was available in late 2018. Data prior to 2011 is all from census.gov, where the specific Quick Facts page providing the data is no longer available. The more recent data comes from the USDA (ers.usda.gov), the Bureau of Labor Statistics (bls.gov/lau), SAIPE (census.gov/did/www/saipe), and the American Community Survey (census.gov/programs-surveys/acs).

1.4 The study we had in mind regarding chocolate and heart attack patients: Janszky et al. 2009. Chocolate consumption and mortality following a first acute myocardial infarction: the Stockholm Heart Epidemiology Program. Journal of Internal Medicine 266:3, p248-257.

1.4 The Nurses’ Health Study was mentioned. For more information on this data set, see www.channing.harvard.edu/nhs

1.5 The study we had in mind during the introduction of Section 1.5.1 was Anturane Reinfarction Trial Research Group. 1980. Sulfinpyrazone in the prevention of sudden death after myocardial infarction. New England Journal of Medicine 302(5):250-256.

Chapter 2: Summarizing data

2.1 county → This data set is described in the data for Chapter 1.

2.1 email50, email → These data represent emails sent to David Diez. Each data set includes 21 variables. The email50 data set is a random sample of 50 emails from email.

2.2 loan50, county → These data sets are described in the data for Chapter 1.

2.2 email50, email → These data sets are described in the data for Section 2.1.

2.2 2019 mean and median income. → https://data.census.gov/cedsci/table?hidePreview=true&tid=ACSST1Y2019.S1901

2.2 possum → The brushtail possum statistics are based on a sample of possums from Australia and New Guinea. The original source of this data is as follows: Lindenmayer DB, et al. 1995. Morphological variation among columns of the mountain brushtail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupiala). Australian Journal of Zoology 43: 449-458.

2.3 SAT and ACT score distributions → The SAT score data comes from the 2018 distribution, which is provided at reports.collegeboard.org/pdf/2018-total-group-sat-suite-assessments-annual-report.pdf The ACT score data is available at act.org/content/dam/act/unsecured/documents/cccr2018/P 99 999999 N S N00 ACT-GCPR National.pdf We also
acknowledge that the actual ACT score distribution is not nearly normal. However, since the topic is very accessible, we decided to keep the context and examples.

2.3 nba players 19 → Summary information on the NBA players for the 2018-2019 season. Data were retrieved from www.nba.com/players.

2.4 loans full schema → This data set is described in the data for Chapter 1.

2.5 malaria → Lyke et al. 2017. PfSPZ vaccine induces strain-transcending T cells and durable protection against heterologous controlled human malaria infection. PNAS 114(10):2711-2716. www.pnas.org/content/114/10/2711

Chapter 3: Probability and probability distributions

3.1 email → This data set is described in the data for Chapter 2.

3.1 playing cards → A table describing the 52 cards in a standard deck.

3.2 Machine learning on fashion. → This is a simulated data set, not based on any specific machine learning classifier.

3.2 smallpox → Fenner F. 1988. Smallpox and Its Eradication (History of International Public Health, No. 6). Geneva: World Health Organization. ISBN 92-4-156110-6.

3.2 family college → A simulated data set based on real population summaries at nces.ed.gov/pubs2001/2001126.pdf.

3.2 Mammogram screening, probabilities. → The probabilities reported were obtained using studies reported at www.breastcancer.org and www.ncbi.nlm.nih.gov/pmc/articles/PMC1173421.

3.4 stocks 18 → Monthly returns for Caterpillar, Exxon Mobil Corp, and Google for November 2015 to October 2018.

3.5 Blood type prevalence. → The fraction of people with O+ blood is about 38% according to https://www.redcrossblood.org/donate-blood/blood-types/o-blood-type.html We used 35% for simplicity in the examples.

Chapter 4: Sampling distributions

4.1 Blood type prevalence. → This data set is described in the data for Chapter 3.

4.2 run17, run17samp → These data sets represent the full population and a sample of the runners and their run times in the 2017 Cherry Blossom Run in Washington, DC.
For more details, see www.cherry
|
blossom.org. 4.2 poker → The full data set includes poker winnings (and losses) for 50 days by a professional poker player, which represents their first 50 days trying to play for a living. Anonymity has been requested by the player. Chapter 5: Foundations for inference 5.1 email → This data set is described in the data for Chapter 2. 5.1 pew energy 2018 → The actual data has more observations than were referenced in this chapter. That is, we used a subsample since it helped smooth some of the examples to have a bit more variability. The pew energy 2018 data set represents the full data set for each of the different energy source questions, which covers solar, wind, offshore drilling, hydrolic fracturing, and nuclear energy. The statistics used to construct the data are from the following page: www.pewinternet.org/2018/05/14/majorities-see-government-efforts-to-protect-the-environmentas-insufficient/ 5.2 pew energy 2018 → See the details for this data set above in the Section 5.1 data section. 5.2 ebola survey → In New York City on October 23rd, 2014, a doctor who had recently been treating Ebola patients in Guinea went to the hospital with a slight fever and was subsequently diagnosed with Ebola. Soon thereafter, an NBC 4 New York/The Wall Street Journal/Marist Poll found that 82% of New Yorkers favored a “mandatory 21-day quarantine for anyone who has come in contact with an Ebola patient”. This poll included responses of 1,042 New York adults between Oct 26th and 28th, 2014. Poll ID NY141026 on maristpoll.marist.edu. 5.3 transplant → This is a made up data set about the health outcomes for a hypothetical medical consultant. Note that the data set on the website has 62 patients, not 142 patients, so there will a difference for what is covered in this book vs the data set on the website. 5.3 Alaska residents under 5 years old. → The 2010 statistic comes from the US census: https://data.census.gov. 506 APPENDIX B. 
Chapter 6: Inference for categorical data

6.1 Supreme Court → The Gallup organization began measuring the public's view of the Supreme Court's job performance in 2000, and has measured it every year since then with the question: "Do you approve or disapprove of the way the Supreme Court is handling its job?". In 2018, the Gallup poll randomly sampled 1,033 adults in the U.S. and found that 53% of them approved. https://news.gallup.com/poll/237269/supreme-court-approval-highest-2009.aspx
6.1 Life on other planets → A February 2018 Marist Poll reported: "Many Americans (68%) think there is intelligent life on other planets". This is up from 52% in 2005. The results were based on a random sample of 1,033 adults in the U.S. http://maristpoll.marist.edu/212-are-americans-poised-for-an-alien-invasion
6.1 Congressional approval rating. → This survey data is from news.gallup.com/poll/237176/snapshot-congressional-job-approval-july.aspx
6.1 Tire inspection. → This is a hypothetical scenario not based on real data.
6.1 Toohey poll. → This is a hypothetical scenario not based on a real person or real data.
6.1 Support for nuclear energy. → The results are from the following Gallup poll: www.gallup.com/poll/182180/support-nuclear-energy.aspx
6.2 cpr → Böttiger et al. Efficacy and safety of thrombolytic therapy after initially unsuccessful cardiopulmonary resuscitation: a prospective clinical trial. The Lancet, 2001.
6.2 gear company → This is a hypothetical scenario not based on real data.
6.2 healthcare law survey → Pew research survey on the Affordable Care Act (aka Obamacare) that ran the survey question with two variants. http://www.people-press.org/2012/03/26/public-remains-split-on-health-care-bill-opposedto-mandate/
6.2 fish oil 18 → Manson JE, et al. 2018. Marine n-3 Fatty Acids and Prevention of Cardiovascular Disease and Cancer. NEJMoa1811403.
6.3 jury → Simulated data set of registered voter proportions and representation on juries from a population.
6.3 M&Ms → Rick Wicklin collected a sample of 712 candies, or about 1.5 pounds, and counted how many there were of each color. https://qz.com/918008/the-color-distribution-of-mms-as-determined-by-a-phd-in-statistics
6.4 gsearch → Simulated (fake) data set for Google search experiment.
6.4 ask → Experiment results from asking about iPods, where the original source is: Minson JA, Ruedy NE, Schweitzer ME. There is such a thing as a stupid question: Question disclosure in strategic communication. opim.wharton.upenn.edu/DPlab/papers/workingPapers/ Minson working Ask%20(the%20Right%20Way)%20and%20You%20Shall%20Receive.pdf
6.4 Obama and Congressional approval by political affiliation. → This survey was completed by Pew Research and the full results may be found at: http://www.people-press.org/2012/03/14/romney-leads-gop-contest-trails-in-matchup-with-obama
6.4 Attitudes on climate change → A Pew Research poll published in May of 2021 looks at how Americans' attitudes about climate change differ by generation, party and other factors. https://www.pewresearch.org/fact-tank/2021/05/26/key-findings-how-americans-attitudes-about-climate-change-differ-by-generation-party-and-other-factors/

Chapter 7: Inference for numerical data

7.1 Risso's dolphins → Endo T and Haraguchi K. 2009. High mercury levels in hair samples from residents of Taiji, a Japanese whaling town. Marine Pollution Bulletin 60(5):743-747. Taiji was featured in the movie The Cove, and it is a significant source of dolphin and whale meat in Japan. Thousands of dolphins pass through the Taiji area annually, and we will assume these 19 dolphins represent a simple random sample from those dolphins.
7.1 Croaker white fish → www.fda.gov/food/foodborneillnesscontaminants/metals/ucm115644.htm
7.1 run17samp → This data set is described in the data for Chapter 4.
7.2 textbooks, ucla textbooks f18 → Data were collected by OpenIntro staff in 2010 and again in 2018. For the 2018 sample, we sampled 201 UCLA courses. Of those, 68 required books could be found on Amazon. The websites where information was retrieved: sa.ucla.edu/ro/public/soc, ucla.verbacompare.com, and amazon.com.
7.3 Jennifer-John study. → Moss-Racusin CA, et al. 2012. Science faculty's subtle gender biases favor male students. PNAS October 9, 2012 109 (41) 16474-16479. https://www.pnas.org/content/109/41/16474
7.2 sat improve → This is a hypothetical (fake) data set for SAT improvement from an SAT preparation company.
7.3 resume → Study for racial bias in hiring, where the study's data is available in the resume data set. This data set is explored in great detail in the logistic regression section of the OpenIntro Statistics textbook (free PDF). The original source for this data is: Bertrand M, Mullainathan S. 2004. Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. The American Economic Review 94:4 (991-1013). www.nber.org/papers/w9873
7.3 Exams variants. → This is a simulated (fake) data set for exam performance of students for two different exam variations.
7.3 ncbirths → A random sample of 1000 NC births. A sample of that random sample was used for the example in the section.
7.3 stem cells → Menard C, et al. 2005. Transplantation of cardiac-committed mouse embryonic stem cells to infarcted sheep myocardium: a preclinical study. The Lancet: 366:9490, p1005-1012. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(05)67380-1/fulltext

Chapter 8: Introduction to linear regression

8.1 simulated scatter → Fake data used for the first three plots. The perfect linear plot uses group 4 data, where group is a variable in the data set (Figure 8.1). The group of 3 imperfect linear plots use groups 1-3 (Figure 8.2). The sinusoidal curve uses group 5 data (Figure 8.3). The group of 3 scatterplots with residual plots use groups 6-8 (Figure 8.8). The correlation plots use groups 9-19 data (Figures 8.9 and 8.10).
8.1 possum → This data is described in the data for Chapter 2.
8.1 simulated scatter → The plots for things that can go wrong use groups 20-23 (Figure 8.26).
8.2 elmhurst → These data were sampled from a table of data for all freshman from the 2011 class at Elmhurst College that accompanied an article titled What Students Really Pay to Go to College published online by The Chronicle of Higher Education: chronicle.com/article/WhatStudents-Really-Pay-to-Go/131435.
8.2 textbooks, ucla textbooks f18 → This data is described in the data for Chapter 7.
8.2 loan50 → This data set is described in the data for Chapter 1.
8.2 mariokart → Auction data from Ebay (ebay.com) for the game Mario Kart for the Nintendo Wii. This data set was collected in early October, 2009.
8.2 simulated scatter → The plots for types of outliers use groups 24-29 (Figure 8.19).
8.3 county, county complete → These data sets are described in the data for Chapter 1.
8.4 midterms house → Data were retrieved from Wikipedia.

509

Appendix C
Distribution tables

C.1 Random Number Table

[Table of random digits: 30 rows of random five-digit groups, arranged in column blocks 1-5, 6-10, and so on. The flattened digit listing from the original page layout is not reproduced here.]
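A random number table like the one in Section C.1 is simply a grid of uniformly random digits printed in five-digit groups for readability. As an illustration (not part of the text; the function name and seed are our own choices), such a table can be generated with Python's standard library:

```python
import random

def random_number_table(rows, groups=5, digits=5, seed=42):
    """Return `rows` lines of space-separated random digit groups,
    in the style of a printed random number table (e.g. '44394 76100 ...')."""
    rng = random.Random(seed)  # fixed seed so every reader gets the same table
    table = []
    for _ in range(rows):
        row = " ".join(
            "".join(rng.choice("0123456789") for _ in range(digits))
            for _ in range(groups)
        )
        table.append(row)
    return table

for line in random_number_table(3):
    print(line)
```

A published table is generated once and then reused verbatim; fixing the seed plays that role here, making the output reproducible.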
C.2 Normal Probability Table

A normal probability table may be used to find percentiles of a normal distribution using a Z-score, or vice-versa. Such a table lists Z-scores and the corresponding percentiles. An abbreviated probability table is provided in Figure C.1 that we'll use for the examples in this appendix. A full table may be found on page 512.

         Second decimal place of Z
 Z      0.00    0.01    0.02    0.03    0.04    0.05    0.06    0.07    0.08    0.09
 0.0    0.5000  0.5040  0.5080  0.5120  0.5160  0.5199  0.5239  0.5279  0.5319  0.5359
 0.1    0.5398  0.5438  0.5478  0.5517  0.5557  0.5596  0.5636  0.5675  0.5714  0.5753
 0.2    0.5793  0.5832  0.5871  0.5910  0.5948  0.5987  0.6026  0.6064  0.6103  0.6141
 0.3    0.6179  0.6217  0.6255  0.6293  0.6331  0.6368  0.6406  0.6443  0.6480  0.6517
 0.4    0.6554  0.6591  0.6628  0.6664  0.6700  0.6736  0.6772  0.6808  0.6844  0.6879
 0.5    0.6915  0.6950  0.6985  0.7019  0.7054  0.7088  0.7123  0.7157  0.7190  0.7224
 0.6    0.7257  0.7291  0.7324  0.7357  0.7389  0.7422  0.7454  0.7486  0.7517  0.7549
 0.7    0.7580  0.7611  0.7642  0.7673  0.7704  0.7734  0.7764  0.7794  0.7823  0.7852
 0.8    0.7881  0.7910  0.7939  0.7967  0.7995  0.8023  0.8051  0.8078  0.8106  0.8133
 0.9    0.8159  0.8186  0.8212  0.8238  0.8264  0.8289  0.8315  0.8340  0.8365  0.8389
 1.0    0.8413  0.8438  0.8461  0.8485  0.8508  0.8531  0.8554  0.8577  0.8599  0.8621
 1.1    0.8643  0.8665  0.8686  0.8708  0.8729  0.8749  0.8770  0.8790  0.8810  0.8830
 ...

Figure C.1: A section of the normal probability table. The percentile for a normal random variable with Z = 1.00 has been highlighted, and the percentile closest to 0.8000 has also been highlighted.

When using a normal probability table to find a percentile for Z (rounded to two decimals), identify the proper row in the normal probability table up through the first decimal, and then determine the column representing the second decimal value. The intersection of this row and column is the percentile of the observation. For instance, the percentile of Z = 0.45 is shown in row 0.4 and column 0.05 in Figure C.1: 0.6736, or the 67.36th percentile.

Figure C.2: The area to the left of Z represents the percentile of the observation.

EXAMPLE C.1
SAT scores follow a normal distribution, N (1100, 200). Ann earned a score of 1300 on her SAT with a corresponding Z-score of Z = 1. She would like to know what percentile she falls in among all SAT test-takers.

Ann's percentile is the percentage of people who earned a lower SAT score than her. We shade the area representing those individuals in the following graph:

The total area under the normal curve is always equal to 1, and the proportion of people who scored below Ann on the SAT is equal to the area shaded in the graph. We find this area by looking in row 1.0 and column 0.00 in the normal probability table: 0.8413. In other words, Ann is in the 84th percentile of SAT takers.

EXAMPLE C.2
How do we find an upper tail area? The normal probability table always gives the area to the left. This means that if
we want the area to the right, we first find the lower tail and then subtract it from 1. For instance, 84.13% of SAT takers scored below Ann, which means 15.87% of test takers scored higher than Ann.

We can also find the Z-score associated with a percentile. For example, to identify Z for the 80th percentile, we look for the value closest to 0.8000 in the middle portion of the table: 0.7995. We determine the Z-score for the 80th percentile by combining the row and column Z values: 0.84.

EXAMPLE C.3
Find the SAT score for the 80th percentile.

We look for the value in the table closest to 0.8000. The closest value is 0.7995, which corresponds to Z = 0.84, where 0.8 comes from the row value and 0.04 comes from the column value. Next, we set up the equation for the Z-score and the unknown value x as follows, and then we solve for x:

Z = 0.84 = (x − 1100) / 200  →  x = 1268

The College Board scales scores to increments of 10, so the 80th percentile is 1270. (Reporting 1268 would have been perfectly okay for our purposes.)

For additional details about working with the normal distribution and the normal probability table, see Section 2.3, which starts on page 101.
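The table lookups in Examples C.1-C.3 can be reproduced in code. As a sketch (not part of the text), Python's standard-library `statistics.NormalDist` performs both the forward lookup (percentile from a value) and the inverse lookup (value from a percentile):

```python
from statistics import NormalDist

sat = NormalDist(mu=1100, sigma=200)  # SAT scores, N(1100, 200)

# Example C.1: Ann's 1300 has Z = 1; her percentile is the area to the left.
ann = sat.cdf(1300)
print(round(ann, 4))            # 0.8413

# Example C.2: upper tail area = 1 - lower tail area.
print(round(1 - ann, 4))        # 0.1587

# Example C.3: the 80th percentile; the table gives Z = 0.84,
# so x = 1100 + 0.84 * 200 = 1268.
print(round(sat.inv_cdf(0.80)))  # 1268
```

`inv_cdf` inverts the table lookup exactly, so it returns 1268 (before the College Board's rounding to increments of 10) rather than requiring us to scan for the value closest to 0.8000.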
[Pages 512-513 contain the full normal probability table, first for negative Z and then for positive Z; the flattened values from the original page layout are not reproduced here. ∗For Z ≤ −3.50, the probability is less than or equal to 0.0002. ∗For Z ≥ 3.50, the probability is greater than or equal to 0.9998.]

C.3 t-Probability Table

A t-probability table may be used to find tail areas of a t-distribution using a T-score, or vice-versa. Such a table lists T-scores and the corresponding percentiles. A partial t-table is shown in Figure C.3, and the complete
table starts on page 516.

Each row in the t-table represents a t-distribution with different degrees of freedom. The columns correspond to tail probabilities. For instance, if we know we are working with the t-distribution with df = 18, we can examine row 18, which is highlighted in Figure C.3. If we want the value in this row that identifies the T-score (cutoff) for an upper tail of 10%, we can look in the column where one tail is 0.100. This cutoff is 1.33. If we had wanted the cutoff for the lower 10%, we would use -1.33. Just like the normal distribution, all t-distributions are symmetric.

one tail    0.100   0.050   0.025   0.010   0.005
two tails   0.200   0.100   0.050   0.020   0.010
df    1      3.08    6.31   12.71   31.82   63.66
      2      1.89    2.92    4.30    6.96    9.92
      3      1.64    2.35    3.18    4.54    5.84
    ...
     17      1.33    1.74    2.11    2.57    2.90
     18      1.33    1.73    2.10    2.55    2.88
     19      1.33    1.73    2.09    2.54    2.86
     20      1.33    1.72    2.09    2.53    2.85
    ...
    400      1.28    1.65    1.97    2.34    2.59
    500      1.28    1.65    1.96    2.33    2.59
      ∞      1.28    1.64    1.96    2.33    2.58

Figure C.3: An abbreviated look at the t-table. Each row represents a different t-distribution. The columns describe the cutoffs for specific tail areas. The row with df = 18 has been highlighted.

EXAMPLE C.4
What proportion of the t-distribution with 18 degrees of freedom falls below -2.10?

Just like a normal probability problem, we first draw the picture and shade the area below -2.10:

To find this area, we first identify the appropriate row: df = 18. Then we identify the column containing the absolute value of -2.10; it is the third column. Because we are looking for just one tail, we examine the top line of the table, which shows that a one tail area for a value in the third column corresponds to 0.025. That is, 2.5% of the distribution falls below -2.10. In the next example we encounter a case where the exact T-score is not listed in the table.

EXAMPLE C.5
A t-distribution with 20 degrees of freedom is shown in the left panel of Figure C.4. Estimate the proportion of the distribution falling above 1.65.

We identify the row in the t-table using the degrees of freedom: df = 20. Then we look for 1.65; it is not listed. It falls between the first and second columns. Since these values bound 1.65, their tail areas will bound the tail area corresponding to 1.65. We identify the one tail area of the first and second columns, 0.050 and 0.100, and we conclude that between 5% and 10% of the distribution is more than 1.65 standard deviations above the mean. If we like, we can identify the precise area using statistical software: 0.0573.

Figure C.4: Left: The t-distribution with 20 degrees of freedom, with the area above 1.65 shaded. Right: The t-distribution with 475 degrees of freedom, with the area further than 2 units from 0 shaded.

EXAMPLE C.6
A t-distribution with 475 degrees of freedom is shown in the right panel of Figure C.4. Estimate the proportion of the distribution falling more than 2 units from the mean (above or below).

As before, first identify the appropriate row: df = 475. This row does not exist! When this happens, we use the next smaller row, which in this case is df = 400. Next, find the columns that capture 2.00; because 1.97 < 2.00 < 2.34, we use the third and fourth columns. Finally, we find bounds for the tail areas by looking at the two tail values: 0.02 and 0.05. We use the two tail values because we are looking for two symmetric tails in the t-distribution.

GUIDED PRACT
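The tail areas in Examples C.4-C.6 can be checked numerically. The sketch below (not part of the text) integrates the t density using only Python's standard library; in practice one would ask statistical software (for example SciPy's `scipy.stats.t.sf`) for these areas directly:

```python
import math

def t_pdf(x, df):
    """Density of the t-distribution with df degrees of freedom.
    lgamma is used instead of gamma to avoid overflow for large df."""
    log_c = math.lgamma((df + 1) / 2) - math.lgamma(df / 2) - 0.5 * math.log(df * math.pi)
    return math.exp(log_c) * (1 + x * x / df) ** (-(df + 1) / 2)

def upper_tail(t0, df, hi=60.0, n=20000):
    """P(T > t0) via Simpson's rule on [t0, hi]; the tail beyond hi is negligible."""
    h = (hi - t0) / n
    total = t_pdf(t0, df) + t_pdf(hi, df)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * t_pdf(t0 + i * h, df)
    return total * h / 3

# Example C.4: P(T_18 < -2.10) = P(T_18 > 2.10) by symmetry, about 0.025.
print(round(upper_tail(2.10, 18), 3))   # 0.025

# Example C.5: P(T_20 > 1.65); the table brackets it between 0.050 and 0.100,
# and software gives 0.0573.
print(round(upper_tail(1.65, 20), 4))   # 0.0573

# Example C.6: two tails beyond +/- 2 for df = 475, bracketed by 0.02 and 0.05.
two_tail = 2 * upper_tail(2.00, 475)
print(0.02 < two_tail < 0.05)           # True
```

The integration confirms the bracketing logic in the examples: exact T-scores not printed in the table still have tail areas pinned between the neighboring columns.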