Low price
CHF191.20
Print on demand - a copy will be sourced for you.
Nature didn't design human beings to be statisticians, and in fact our minds are more naturally attuned to spotting the saber-toothed tiger than seeing the jungle he springs from. Yet scientific discovery in practice is often more jungle than tiger. Those of us who devote our scientific lives to the deep and satisfying subject of statistical inference usually do so in the face of a certain under-appreciation from the public, and also (though less so these days) from the wider scientific world. With this in mind, it feels very nice to be over-appreciated for a while, even at the expense of weathering a 70th birthday. (Are we certain that some terrible chronological error hasn't been made?) Carl Morris and Rob Tibshirani, the two colleagues I've worked most closely with, both fit my ideal profile of the statistician as a mathematical scientist working seamlessly across wide areas of theory and application. They seem to have chosen the papers here in the same catholic spirit, and then cajoled an all-star cast of statistical savants to comment on them.
Collects the most important papers of Bradley Efron in one place.
Adds comments by Efron and other important statisticians.
Contents
Foreword by Bradley Efron.
1. From 1965: The convex hull of a random set of points. Introduced by Tom Cover.
2. From 1971: Forcing a sequential experiment to be balanced. Introduced by Herman Chernoff.
3. From 1975: Defining the curvature of a statistical problem (with applications to second order efficiency). Introduced by Rob Kass and Paul Vos.
4. From 1975: Data analysis using Stein's estimator and its generalizations (with Carl Morris). Introduced by John Rolph.
5. From 1976: Estimating the number of unseen species: How many words did Shakespeare know? (with Ronald Thisted). Introduced by Peter McCullagh.
6. From 1977: The efficiency of Cox's likelihood function for censored data. Introduced by John Kalbfleisch.
7. From 1977: Stein's paradox in statistics (with Carl Morris). Introduced by Jim Berger.
8. From 1978: Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information (with David V. Hinkley). Introduced by Thomas DiCiccio.
9. From 1979: Bootstrap methods: Another look at the jackknife. Introduced by David Hinkley.
10. From 1981: The jackknife estimate of variance (with Charles Stein). Introduced by Jun Shao and C.F. Jeff Wu.
11. From 1982: The Jackknife, the Bootstrap and Other Resampling Plans [excerpt]. Introduced by Peter Hall.
12. From 1983: Estimating the error rate of a prediction rule: Improvement on cross-validation. Introduced by Trevor Hastie.
13. From 1986: Why isn't everyone a Bayesian? Introduced by Larry Wasserman.
14. From 1987: Better bootstrap confidence intervals. Introduced by Peter Bickel.
15. From 1993: An Introduction to the Bootstrap (with Robert Tibshirani) [excerpt]. Introduced by Rudy Beran.
16. From 1996: Using specially designed exponential families for density estimation (with Robert Tibshirani). Introduced by Nancy Reid.
17. From 1996: Bootstrap confidence levels for phylogenetic trees (correction) (with Elizabeth Halloran and Susan Holmes). Introduced by Joe Felsenstein.
18. From 1998: R. A. Fisher in the 21st century. Introduced by Stephen Stigler.
19. From 2001: Empirical Bayes analysis of a microarray experiment (with Robert Tibshirani, John D. Storey and Virginia Tusher). Introduced by Rafael Irizarry.
20. From 2004: Least angle regression (with Trevor Hastie, Iain Johnstone and Robert Tibshirani). Introduced by David Madigan.
21. From 2004: Large-scale simultaneous hypothesis testing: The choice of a null hypothesis. Introduced by Michael Newton.
President's Corner by Bradley Efron (AMSTAT News, April 2004): But What Do Statisticians Do?