Favorite Books
This book includes modern theories of asymptotic analysis (or large-sample analysis), which will be useful for mathematical statisticians. Empirical process techniques are used as the main tool to prove theorems. I found that the functional delta method (Sec. 20), Z-/M-estimators (Sec. 5), U-statistics (Sec. 12), and the influence function (Sec. 20) are very helpful for studying the asymptotic behavior of statistics commonly used in non- and semi-parametric inference, especially in survival analysis (a schematic statement of the delta method is sketched below). The book also features many classical topics such as Bahadur efficiency, Le Cam's lemmas, etc. Overall, many theorems are difficult to prove. This may be because the theorems are given under minimal (weak) regularity conditions, which leads to technically long proofs. I found that the section on semiparametric models (Sec. 25) does not read well, due to its highly abstract (and partly unclear) explanations.
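To give a flavor of the functional delta method mentioned above (a schematic statement in my own notation, not a quote from the book): if \(\phi\) is Hadamard differentiable at \(P\) with derivative \(\phi'_P\), and \(\sqrt{n}(\hat{P}_n - P) \rightsquigarrow G\), then
\[ \sqrt{n}\bigl(\phi(\hat{P}_n) - \phi(P)\bigr) \rightsquigarrow \phi'_P(G). \]
This is the device that turns weak convergence of the empirical process into limit distributions for smooth functionals of the empirical distribution (for example, the Kaplan-Meier estimator in survival analysis).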
Suitable level: at least a Master's in Statistics or Mathematics; more suitable for Ph.D. students or professors in Statistics.
This book describes the theory of empirical processes and their weak convergence. The book comprises empirical process theory (Chapters 1 and 2) and statistical applications (Chapter 3). As a statistician, I find Chapter 3 the most useful, since it describes M-estimators, Z-estimators, the functional delta method, etc. In particular, Section 3.3 (Z-Estimators) presents a very general theory of Z-estimators, which I found useful in studying the nonparametric maximum likelihood estimator (NPMLE).
The theorems and their proofs are usually difficult to understand. The main difficulty comes from the generality of the treatment. For example, the Z-estimators treated in this book take their values in a Banach space (the usual Z-estimators in parametric or semiparametric models take their values in a Euclidean space). Hence, readers need some abstract thinking to understand the mathematical objects. Nevertheless, this generality of the treatment makes the theory of Z-estimators very powerful and widely applicable, especially for studying the NPMLE.
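Roughly, the general Z-estimator result has the following shape (a schematic statement in my own notation; see the book for the precise regularity conditions): let \(\Psi_n\) and \(\Psi\) map a subset of a Banach space into another Banach space, with \(\Psi_n(\hat\theta_n) \approx 0\) and \(\Psi(\theta_0) = 0\). If \(\Psi\) is Fréchet differentiable at \(\theta_0\) with continuously invertible derivative \(\dot\Psi_{\theta_0}\), and \(\sqrt{n}(\Psi_n - \Psi)(\theta_0) \rightsquigarrow Z\) together with a stochastic equicontinuity condition, then
\[ \sqrt{n}(\hat\theta_n - \theta_0) \rightsquigarrow -\dot\Psi_{\theta_0}^{-1} Z. \]
Because \(\theta\) may be an infinite-dimensional parameter (e.g., a cumulative hazard function), this covers the NPMLE situations mentioned above.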
From a theoretical point of view, the book carefully treats measurability problems. It is surprising that empirical processes are, in general, not Borel measurable (p. 3). To deal with this issue, the book adopts Hoffmann-Jørgensen-type arguments, which drop the measurability requirements by working with outer expectations. This is interesting.
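Concretely, the Hoffmann-Jørgensen approach defines weak convergence through outer expectations (my paraphrase of the definition the book uses): for possibly non-measurable maps \(X_n\) into a metric space and a Borel measurable limit \(X\),
\[ X_n \rightsquigarrow X \quad \text{iff} \quad E^* f(X_n) \to E f(X) \ \text{for every bounded continuous } f, \]
where \(E^*\) denotes outer expectation.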
Suitable level: at least a Master's in Statistics or Mathematics; more suitable for Ph.D. students or professors in Statistics. A very difficult and challenging book.
This book summarizes a wide range of topics in mathematical statistics; no real data analysis is included. I had a chance to read this book while teaching a required course for Ph.D. students. The major reason I like this book is its coverage of material: it includes the important classical results that students majoring in Statistics are expected to learn.
Chapter 1 is a compact but very dense introduction to probability theory, including the measure-theoretic definitions of the Radon-Nikodym derivative and conditional expectation. This measure-theoretic treatment of probability serves at least two purposes: 1) it unifies discrete and continuous distributions into a single framework; 2) it makes it possible to define advanced probability concepts (conditional expectation, martingales, and Markov chains). Readers who are already familiar with probability theory can probably skip Chapter 1 and start from Chapter 2 (Fundamentals of Statistics).
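For readers who need a reminder, the two definitions mentioned above are (standard measure-theoretic facts, stated in my own notation): if \(\nu \ll \mu\) are \(\sigma\)-finite measures, the Radon-Nikodym derivative \(d\nu/d\mu\) is the (a.e. unique) density satisfying
\[ \nu(A) = \int_A \frac{d\nu}{d\mu}\, d\mu \quad \text{for all measurable } A, \]
and for an integrable \(X\) and a sub-\(\sigma\)-field \(\mathcal{A}\), the conditional expectation \(E(X \mid \mathcal{A})\) is the (a.s. unique) \(\mathcal{A}\)-measurable random variable with
\[ \int_A E(X \mid \mathcal{A})\, dP = \int_A X\, dP \quad \text{for all } A \in \mathcal{A}. \]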
Chapter 2 provides a decision-theoretic framework that covers point estimation, interval estimation, testing, etc. It also introduces the fundamental concepts used to evaluate the performance of estimators, such as admissibility, minimaxity, and consistency. This chapter also covers shrinkage estimators in simultaneous estimation. This chapter seems to be the most important and interesting part of the book.
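The canonical example of a shrinkage estimator in simultaneous estimation is the James-Stein estimator (my summary of a standard result, not a quote from the book): for \(X \sim N_p(\theta, \sigma^2 I_p)\) with known \(\sigma^2\) and \(p \ge 3\),
\[ \hat\theta_{\mathrm{JS}} = \left(1 - \frac{(p-2)\,\sigma^2}{\lVert X\rVert^2}\right) X \]
has strictly smaller risk than \(X\) under squared-error loss, so the MLE \(X\) is inadmissible in dimensions \(p \ge 3\).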
Chapter 6 studies hypothesis testing. I found this chapter the most difficult to read, since the theories of UMP, UMPU, and UMPI tests are quite abstract and the proofs are less clear. This difficulty may be partly due to the arrangement of the theorems: there are too many equation numbers in the theorems and lemmas, and too many statements packed into single theorems. For example, Lemma 6.7 contains so many equation numbers that I found it annoying to follow.
Sometimes the description in the book becomes so general and abstract that I had difficulty teaching from it. For example, the descriptions of the LSE and the BLUE focus heavily on the identifiability conditions for the parameters, with few concrete examples (a minimal illustration of the identifiability issue is sketched below). In addition, some derivations of the formulas are not clearly written, which makes them difficult to follow.
The large number of exercises listed at the end of each chapter is useful for teaching and homework assignments (although students can easily find answers on the internet).
The book nicely explains multivariate analysis through a very clear, step-by-step presentation, using a lot of matrix algebra. Many data examples are also used to illustrate the formulas and theories. The author emphasizes the geometrical interpretation of the results. The geometrical interpretation is well explained in general, but it is sometimes not easy to follow if the reader is not familiar with linear algebra. Although the title of the book includes "Applied", the contents seem more interesting for "theoretical" or "mathematical" statisticians, especially those who like to interpret results geometrically. Since the treatment of the linear algebra is so nice, the book can also serve as a useful reference for linear algebra.
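As one example of the kind of geometric interpretation meant here (a standard illustration from multivariate analysis, not necessarily how this book phrases it): for a centered data matrix \(X\) with sample covariance matrix \(S\), the first principal component direction is
\[ v_1 = \arg\max_{\lVert v\rVert = 1} v^\top S v, \]
i.e., the leading eigenvector of \(S\); equivalently, it is the line through the sample mean that minimizes the total squared orthogonal distance to the data points.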
Suitable level: people who have finished a course in linear algebra.
One of the standard textbooks for survival analysis. This book characterizes survival analysis as a set of techniques for handling "incomplete" data, such as right-censored data. This style is rather different from standard textbooks, in which survival analysis is introduced as a technique for handling right-censored lifetime data only. In fact, most textbooks on survival analysis do not treat left truncation, which is one of the important areas in survival analysis. The book is therefore a particularly useful reference for those who are interested in analysing left-censored data, left-truncated data, right-truncated data, interval-censored data, and competing risks data. The methods are explained with lots of real data examples from medical research.
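As a concrete example of how left-truncated, right-censored data are handled in practice (my own sketch using the R survival package, with a hypothetical data frame d containing entry time, exit time, and an event indicator; this is not code from the book):

    library(survival)
    # Kaplan-Meier-type estimate under left truncation (delayed entry) and
    # right censoring, using the counting-process form of Surv():
    fit <- survfit(Surv(entry, exit, status) ~ 1, data = d)
    summary(fit)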
One of the benchmark textbooks for reliability and survival analysis. I have not read all chapters of this book, but Chap. 2 (Failure Time Models) and Chap. 3 (Inference in Parametric Models) are useful for studying reliability (I published one paper in Technometrics by studying this book). In particular, the book gives a concise description of industrial life testing using the motorette data, a well-known example from the industrial life-testing context. The so-called Weibull regression implemented in the "survreg" routine in R follows the formulation of this book (a minimal call is sketched below). Overall, the book is a useful reference for writing academic papers, as it covers a wide range of modern topics (e.g., Chap. 10, Analysis of Correlated Failure Time Data).
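A minimal sketch of the survreg call mentioned above (the data frame d and the variables time, status, and temp are hypothetical placeholders, not taken from the book):

    library(survival)
    # Weibull accelerated failure time regression; dist = "weibull" is the default.
    fit <- survreg(Surv(time, status) ~ temp, data = d, dist = "weibull")
    summary(fit)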
In addition, Chap. 8 (Competing Risks and Multistate Models) includes an authoritative overview of cause-specific hazard and cumulative incidence analyses for competing risks data. The well-known identifiability issue for competing risks data is also discussed. The description is good enough, but there are better books for studying competing risks, e.g., Classical Competing Risks by M. Crowder (2001).
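For reference, a nonparametric cumulative incidence analysis of the sort described in Chap. 8 can be carried out in R roughly as follows (my own sketch using the cmprsk package, which is not mentioned in the book; the data frame d and its columns are hypothetical):

    library(cmprsk)
    # Nonparametric cumulative incidence functions for each competing cause;
    # d$cause codes the cause of failure, with 0 meaning right-censored.
    fit <- cuminc(ftime = d$time, fstatus = d$cause, cencode = 0)
    plot(fit)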
In general, there are many equations whose derivations are unclear. Hence, students may require considerable energy and time to understand the book without an instructor.