NPS-OR-91-030
NAVAL POSTGRADUATE SCHOOL
Monterey, California
A KALMAN FILTER FOR A POISSON SERIES
WITH COVARIATES AND LAPLACE
APPROXIMATION INTEGRATION
Donald P. Gaver
Patricia A. Jacobs
September 1991
Approved for public release; distribution is unlimited.
Prepared for:
Naval Postgraduate School
Monterey, CA 93943
NAVAL POSTGRADUATE SCHOOL
MONTEREY, CALIFORNIA
Rear Admiral R. W. West, Jr.          Harrison Shull
Superintendent                        Provost
This report was prepared in conjunction with research funded under the Naval
Postgraduate School Research Council Research Program.
This report was prepared by:
UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE

REPORT DOCUMENTATION PAGE                        Form Approved OMB No. 0704-0188

1a. REPORT SECURITY CLASSIFICATION: UNCLASSIFIED
1b. RESTRICTIVE MARKINGS:
2a. SECURITY CLASSIFICATION AUTHORITY:
2b. DECLASSIFICATION/DOWNGRADING SCHEDULE:
3. DISTRIBUTION/AVAILABILITY OF REPORT: Approved for public release; distribution is unlimited.
4. PERFORMING ORGANIZATION REPORT NUMBER(S): NPS-OR-91-030
5. MONITORING ORGANIZATION REPORT NUMBER(S):
6a. NAME OF PERFORMING ORGANIZATION: Naval Postgraduate School
6b. OFFICE SYMBOL (if applicable): OR
6c. ADDRESS (City, State, and ZIP Code): Monterey, CA 93943
7a. NAME OF MONITORING ORGANIZATION: O&MN, Direct Funding
7b. ADDRESS (City, State, and ZIP Code): Monterey, CA 93943-5006
8a. NAME OF FUNDING/SPONSORING ORGANIZATION: Naval Postgraduate School
8b. OFFICE SYMBOL (if applicable):
8c. ADDRESS (City, State, and ZIP Code): Monterey, CA 93943
9. PROCUREMENT INSTRUMENT IDENTIFICATION NUMBER:
10. SOURCE OF FUNDING NUMBERS: PROGRAM ELEMENT NO. | PROJECT NO. | TASK NO. | WORK UNIT ACCESSION NO.
11. TITLE (Include Security Classification): A Kalman Filter for a Poisson Series with Covariates and Laplace Approximation Integration
12. PERSONAL AUTHOR(S): Donald P. Gaver and Patricia A. Jacobs
13a. TYPE OF REPORT: Technical Report
13b. TIME COVERED: FROM ___ TO ___
14. DATE OF REPORT (Year, month, day): September 1991
15. PAGE COUNT: 47
16. SUPPLEMENTARY NOTATION:
17. COSATI CODES: FIELD | GROUP | SUBGROUP
18. SUBJECT TERMS: Poisson time series with covariates; hierarchical model; Kalman filter; Laplace method
19. ABSTRACT: A hierarchical model for a Poisson time series is introduced. The model allows the mean or rate of the Poisson variables to vary slowly in time; it is modeled as the exponential of an AR(1) process. In addition, the rate is influenced by a covariate. The Laplace method is used to recursively update some model parameter estimates. Frankly heuristic methods are explored to estimate others of the underlying parameters. The methodology is checked against simulated data with encouraging results.
20. DISTRIBUTION/AVAILABILITY OF ABSTRACT: UNCLASSIFIED/UNLIMITED
21. ABSTRACT SECURITY CLASSIFICATION: UNCLASSIFIED
22a. NAME OF RESPONSIBLE INDIVIDUAL: D. P. Gaver
22b. TELEPHONE (Include Area Code): (408) 646-2605
22c. OFFICE SYMBOL: OR/Gv

DD Form 1473, JUN 86. Previous editions are obsolete. SECURITY CLASSIFICATION OF THIS PAGE: UNCLASSIFIED
A KALMAN FILTER FOR A POISSON SERIES WITH COVARIATES
AND LAPLACE APPROXIMATION INTEGRATION
D. P. Gaver
P. A. Jacobs
0. INTRODUCTION
The Poisson model is an initial idealized, but plausible, off-the-shelf tool
for representing point-process data of nearly any kind; cf. Feller (1966) and
Cox and Lewis (1968). However, to be more descriptive, and even predictive,
representation of non-homogeneity in space or time may be needed. For
instance, the occurrence of rare events in space, such as extreme heights
(i.e., above a given level) of Arctic ice along a transect, or encounters
with ice keels along a submarine track at constant (deep) depth, may well
appear roughly Poisson. A better description of these events may require
more detail than a simple mean or rate: some account of regional and
seasonal variation could be needed for true accuracy; for instance, natural
gaps will occur along a path in Arctic ice if the path crosses leads (open
water amidst an ice pack). For another example, demands for spare parts in
a logistics system are often roughly Poisson, or compound Poisson, but with
a mean or rate that changes erratically but slowly in time. Demands for
communication and computer facilities exhibit a similar temporal pattern,
and there are a great many other examples. To summarize, variation in a
fundamental Poisson rate or mean is very often encountered in practice; if
this variation is slow-moving or persistent enough, it can potentially be
exploited for short-term forecasting.
In this paper we study a model-based procedure for forecasting in the
kinds of environments described. The model introduced allows the mean or
rate of the Poisson process to be itself a random process: the exponential of
an AR(1) autoregressive process. In addition, the rate is influenced by a
covariate. We then recursively update the parameter estimates using an
approximation based on the Laplace method; cf. de Bruijn (1958). The approach
resembles that of Delampady, Yee and Zidek (1991), but frankly heuristic
methods are used to estimate certain of the underlying parameters. The
methodology is checked against simulated data with encouraging results.
1. FORMULATION
Consider the following Poissonian model for count data:

    P{Y_t = y_t | h_t, x_t, β, μ_t, y_0, ..., y_{t-1}}
        = exp{-h_t e^{x_t β + μ_t}} (h_t e^{x_t β + μ_t})^{y_t} / y_t!,    (1.1)

where

    μ_t = a μ_{t-1} + ω_t,    (1.2)

with {ω_t} independent normal/Gaussian random variables with mean 0 and
variance W_t, independent of {μ_i; i ≤ t-1}, {Y_i; i ≤ t-1}, {x_t}, {h_t} and β.
Another hierarchical time series model for count data can be found in
Harvey et al. (1989).
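The data-generating model (1.1)-(1.2) can be simulated in a few lines. The sketch below is ours, not the authors' code; the function name, defaults and seed handling are illustrative, with parameter values matching the simulation experiments reported later, and μ_0 drawn from the stationary distribution when |a| < 1, as is done in those experiments.

```python
import numpy as np

def simulate_series(T, a=0.5, W=0.25, beta=0.5, x=None, h=None, seed=0):
    """Simulate the hierarchical Poisson model (1.1)-(1.2):
    mu_t = a * mu_{t-1} + omega_t with omega_t ~ N(0, W), and
    Y_t | mu_t ~ Poisson(h_t * exp(x_t * beta + mu_t))."""
    rng = np.random.default_rng(seed)
    x = np.ones(T) if x is None else np.asarray(x, float)
    h = np.ones(T) if h is None else np.asarray(h, float)
    mu = np.empty(T)
    # draw mu_0 from the stationary N(0, W / (1 - a^2)) distribution (|a| < 1)
    mu[0] = rng.normal(0.0, np.sqrt(W / (1.0 - a * a)))
    for t in range(1, T):
        mu[t] = a * mu[t - 1] + rng.normal(0.0, np.sqrt(W))
    lam = h * np.exp(x * beta + mu)      # Poisson rate lambda_t
    y = rng.poisson(lam)
    return y, mu, lam
```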
The purpose of this paper is to suggest a Kalman filter-like procedure to
produce successive estimates of β and μ_t as new data become available. The
procedure is based on a Laplace approximation to an integral. A Bayesian
approach to a similar problem is being investigated by Delampady et al.
(1991). A Bayesian approach to time series can be found in West et al. (1989)
and, for a more recent computational approach, in Carlin et al. (1991).
2. AN APPROXIMATE UPDATING PROCEDURE
Assume that the posterior distribution of (β, μ_{t-1}) given {y_i; i ≤ t-1} is
bivariate normal with mean (b_{t-1}, m_{t-1}), Var[β] = τ_{t-1}, Var[μ_{t-1}] = C_{t-1} and
Corr[β, μ_{t-1}] = ρ_{t-1}.

Since μ_t = a μ_{t-1} + ω_t, the prior distribution of (β, μ_t) is bivariate
normal with mean (b_{t-1}, a m_{t-1}),
Var[β] = τ_{t-1}, Var[μ_t] = R_t = a² C_{t-1} + W_t and Corr[β, μ_t] = r_t = a ρ_{t-1} √(C_{t-1}/R_t).

The forecast/prediction distribution of Y_t in terms of data up to t-1 and
the covariate value at t is

    P{Y_t = y_t | Y_i = y_i, i ≤ t-1}
        = ∫∫ exp{-h_t e^{x_t b + z}} (h_t e^{x_t b + z})^{y_t} / y_t! φ_t(b, z) db dz,    (2.3)

where φ_t is the bivariate normal prior density of (β, μ_t). Let

    g(b, z) = y_t (x_t b + z) - h_t e^{x_t b + z}
              - [1/(2(1 - r_t²))] [ (b - b_{t-1})²/τ_{t-1}
                - 2 r_t (b - b_{t-1})(z - a m_{t-1})/√(τ_{t-1} R_t)
                + (z - a m_{t-1})²/R_t ],    (2.4)

the logarithm of the integrand of (2.3) up to additive constants. Its first
partial derivatives are

    ∂g/∂b = y_t x_t - h_t x_t e^{x_t b + z}
            - [1/(1 - r_t²)] [ (b - b_{t-1})/τ_{t-1} - r_t (z - a m_{t-1})/√(τ_{t-1} R_t) ],    (2.5)

    ∂g/∂z = y_t - h_t e^{x_t b + z}
            - [1/(1 - r_t²)] [ (z - a m_{t-1})/R_t - r_t (b - b_{t-1})/√(τ_{t-1} R_t) ],    (2.6)

and its second partial derivatives are

    ∂²g/∂b² = -h_t x_t² e^{x_t b + z} - 1/[(1 - r_t²) τ_{t-1}],
    ∂²g/∂z² = -h_t e^{x_t b + z} - 1/[(1 - r_t²) R_t],
    ∂²g/∂b∂z = -h_t x_t e^{x_t b + z} + r_t/[(1 - r_t²) √(τ_{t-1} R_t)].    (2.7)

Use a Newton procedure to solve the system of equations

    ∂g/∂b = 0,    ∂g/∂z = 0    (2.8)

for m_t and b_t. Solve for τ_t, C_t and ρ_t using the relation

    [ τ_t              ρ_t √(τ_t C_t) ]       [ ∂²g/∂b²    ∂²g/∂b∂z ]⁻¹
    [ ρ_t √(τ_t C_t)   C_t            ]  = -  [ ∂²g/∂b∂z   ∂²g/∂z²  ]      (2.9)

with the derivatives evaluated at (b_t, m_t).

The posterior distribution for (β, μ_t) given y_t, D_{t-1} is approximated by a
bivariate normal distribution with mean (b_t, m_t) and variance-covariance
matrix

    [ τ_t              ρ_t √(τ_t C_t) ]
    [ ρ_t √(τ_t C_t)   C_t            ].    (2.10)
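The updating step can be sketched in code. This is a minimal illustration, not the authors' implementation: it maximizes g(b, z), the log of the Poisson likelihood times the bivariate normal prior density, by Newton's method, and returns the mode and the inverse of the negative Hessian, which are the quantities (2.8)-(2.10) call for. The caller is assumed to pass the prior moments of μ_t: m0 = a m_{t-1} and R = a² C_{t-1} + W_t.

```python
import numpy as np

def laplace_update(y, x, h, b0, m0, tau, R, r):
    """Approximate posterior update for (beta, mu_t), cf. (2.8)-(2.10).
    Maximizes g(b, z) = y*(x*b + z) - h*exp(x*b + z) + log prior density
    by Newton's method; returns the mode (b_t, m_t) and the inverse of
    the negative Hessian as the approximate posterior covariance."""
    S = np.array([[tau, r * np.sqrt(tau * R)],
                  [r * np.sqrt(tau * R), R]])        # prior covariance
    P = np.linalg.inv(S)                             # prior precision
    mean0 = np.array([b0, m0], dtype=float)
    theta = mean0.copy()                             # start at the prior mean
    for _ in range(50):
        lam = h * np.exp(x * theta[0] + theta[1])
        grad = np.array([y * x - lam * x, y - lam]) - P @ (theta - mean0)
        if np.max(np.abs(grad)) < 1e-10:
            break
        hess = -lam * np.array([[x * x, x], [x, 1.0]]) - P
        theta = theta - np.linalg.solve(hess, grad)  # Newton step
    lam = h * np.exp(x * theta[0] + theta[1])
    hess = -lam * np.array([[x * x, x], [x, 1.0]]) - P
    cov = np.linalg.inv(-hess)                       # (2.9)-(2.10)
    return theta, cov
```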
2.1 Summary of the Newton Procedure
0. Start with estimates of the parameters of the prior bivariate distribution
of (β, μ_t); that is, estimates of the mean (b_{t-1}, a m_{t-1}), Corr(β, μ_t) = r_t,
Var(β) = τ_{t-1} and Var[μ_t] = R_t. Set b⁰ = b_{t-1}, m⁰ = a m_{t-1}.
1. Solve the system of linear equations (in b and m)

    ∂²g/∂b²(b⁰, m⁰)(b - b⁰) + ∂²g/∂b∂z(b⁰, m⁰)(m - m⁰) = -∂g/∂b(b⁰, m⁰),
    ∂²g/∂b∂z(b⁰, m⁰)(b - b⁰) + ∂²g/∂z²(b⁰, m⁰)(m - m⁰) = -∂g/∂z(b⁰, m⁰),

and iterate until convergence.

3. ESTIMATION OF a AND W FOR THE POISSON/NORMAL MODEL

If the {μ_t} were observable, the ln-likelihood of the autoregression (1.2)
would yield the score equations

    ∂l/∂a = Σ_{t=1}^{T} (μ_t - a μ_{t-1}) μ_{t-1} / W = 0,    (3.4)

    ∂l/∂W = Σ_{t=1}^{T} { -1/(2W) + (μ_t - a μ_{t-1})²/(2W²) } = 0,    (3.5)

with solutions â(T) = Σ_{t=1}^{T} μ_t μ_{t-1} / Σ_{t=1}^{T} μ²_{t-1} and

    Ŵ(T) = (1/T) Σ_{t=1}^{T} (μ_t - â(T) μ_{t-1})².    (3.6)
Unfortunately μ_t is not observable. One possible estimate of μ_t is the
posterior mean m_t of Subsections 2.1 and 2.2. In this case the corresponding
estimators of a and W are

    â_m(T) = Σ_{t=1}^{T} m_t m_{t-1} / Σ_{t=1}^{T} m²_{t-1}    (3.7)

and

    Ŵ_m(T) = (1/T) Σ_{t=1}^{T} (m_t - â_m(T) m_{t-1})².    (3.8)

Another estimate of μ_t is

    μ̂_t(y) = ln[(y_t + 1/2)/h_t] - x_t b_t,    (3.9)

where b_t is the mean of the approximate posterior distribution of β given
{y_s; s ≤ t}. The corresponding estimates of a and W are

    â_y(T) = Σ_{t=1}^{T} μ̂_t(y) μ̂_{t-1}(y) / Σ_{t=1}^{T} μ̂_{t-1}(y)²    (3.10)

and

    Ŵ_y(T) = (1/T) Σ_{t=1}^{T} (μ̂_t(y) - â_y(T) μ̂_{t-1}(y))².    (3.11)
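In code, the moment estimators above are one-liners. The sketch below (function names are ours) implements (3.7)-(3.9); applying the first two functions to the series produced by the third gives (3.10)-(3.11).

```python
import numpy as np

def ahat_moment(m):
    """(3.7): lag-one moment estimator of a from the posterior means m_t."""
    m = np.asarray(m, float)
    return np.sum(m[1:] * m[:-1]) / np.sum(m[:-1] ** 2)

def What_moment(m):
    """(3.8): estimator of W from the fitted AR(1) residuals of the m_t."""
    m = np.asarray(m, float)
    a = ahat_moment(m)
    return np.mean((m[1:] - a * m[:-1]) ** 2)

def muhat_from_counts(y, h, x, b):
    """(3.9): crude estimate of mu_t from the counts themselves."""
    y = np.asarray(y, float)
    h = np.asarray(h, float)
    x = np.asarray(x, float)
    return np.log((y + 0.5) / h) - x * b
```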
The estimates (3.7)-(3.11) can be recomputed at every time T. Other
similar estimates can be obtained by not recomputing the estimates at every
time T; for example, choose an integer δ ≥ 1 and put

    â(t; δ) = â((n-1)δ; δ)    (3.12)

if (n-1)δ ≤ t < nδ.

A run of zero counts causes the estimate of μ_t to be a negative number
large in absolute value. This tendency caused the numerical Newton
procedure discussed in Section 2 to become unstable, and rendered the
predicted value of λ_t very slow to respond to positive counts when they did
eventually occur. This behavior was ameliorated by arbitrarily putting a
lower bound of -2 on μ̂_t. A run of zero counts could also make the estimate of
W become close to zero. A small value of W makes the Kalman filter
unresponsive to count changes. This behavior was mitigated by recomputing
estimates of a and W every δ = 10 time units rather than at every time, and
using all previous times t as in (3.7)-(3.8). The estimates of a and W were also
not computed if the last 10 observed counts were zero. Further, a lower
bound of 0.1 was arbitrarily set on estimates of W. Such heuristics are not
claimed to be optimal, but are needed to allow the procedures to behave
suitably. A search for more systematic procedures can be carried out in
future work.

The following are empirical observations. The estimates of a and W using
μ̂(t) = m_t tend to produce practically acceptable estimates of a but
underestimate W; they tend to make it very small. This behavior is not
unexpected, since m_t can be thought of as a smoothed estimate of μ_t and so
will have a smaller variance than μ_t. This underestimation of W is not
improved by using μ̂_t = ln[(y_t + 1/2)/h_t] - x_t b_t. Neither does using
"windows" of length δ seem to solve the problem. Other procedures for
estimating a and W are explored in the following two sections.
4. ESTIMATION OF a AND W FOR THE POISSON/NORMAL MODEL:
THE FREEMAN-TUKEY TRANSFORMATION
This section reports another possible approach to the estimation of the
autoregressive parameters for a Poisson/normal model.
The model is as before:

    μ_{t+1} = a μ_t + ω_{t+1},    (4.1)

where {ω_t} are iid normal with mean 0 and variance W, and

    P{Y_t = y_t | μ_t, x_t} = exp{-h_t e^{μ_t + x_t β}} (h_t e^{μ_t + x_t β})^{y_t} / y_t!.    (4.2)
For simplicity of notation we will assume h t = 1.
If W and a are known, then an approximate Kalman procedure has been
given previously. The issue here is the estimation of a and W.
Let

    g(y) = √y + √(y + 1)    (4.3)

and

    Z_t = g(Y_t).    (4.4)

The conditional distribution of Z_t given μ_t is approximately normal with
mean k(μ_t + x_t β), where k(x) = [4eˣ + 1]^{1/2}, and variance 1; cf. Freeman and
Tukey (1949, 1950) and Bishop, Fienberg and Holland (1975). Thus if {V_{t+1}} are
iid standard normal, given D_t = (Y_0, Y_1, ..., Y_t),

    Z_{t+1} ≈ k(μ_{t+1} + x_{t+1} β) + V_{t+1}
            = k(a μ_t + ω_{t+1} + x_{t+1} β) + V_{t+1}
            = k(a m_t + a(μ_t - m_t) + ω_{t+1} + x_{t+1} b_t + x_{t+1}(β - b_t)) + V_{t+1}
            ≈ k(a m_t + x_{t+1} b_t)
              + k′(a m_t + x_{t+1} b_t)[a(μ_t - m_t) + x_{t+1}(β - b_t) + ω_{t+1}] + V_{t+1}.    (4.5)

Hence,

    E[Z_{t+1} | D_t] ≈ k(a m_t + x_{t+1} b_t),    (4.6)

    Var[Z_{t+1} | D_t] ≈ [k′(a m_t + x_{t+1} b_t)]² [a² C_t + x²_{t+1} τ_t
                         + 2a x_{t+1} ρ_t √(C_t τ_t) + W] + 1.    (4.7)
An approximate joint distribution of the transformed observations is

    P{Z_1 ∈ dz_1, Z_2 ∈ dz_2, ..., Z_{T+1} ∈ dz_{T+1}}
      = P{Z_1 ∈ dz_1} P{Z_2 ∈ dz_2 | Z_1 = z_1} × ... × P{Z_{T+1} ∈ dz_{T+1} | Z_1 = z_1, ..., Z_T = z_T}
      = P{Y_1 = y_1} P{Z_2 ∈ dz_2 | Y_1 = y_1} × ... × P{Z_{T+1} ∈ dz_{T+1} | Y_1 = y_1, ..., Y_T = y_T}
      ≈ P{Y_1 = y_1} ∏_{t=1}^{T} [2π f_t(a, W)]^{-1/2}
          exp{ -[z_{t+1} - k(a m_t + x_{t+1} b_t)]² / (2 f_t(a, W)) },    (4.8)

where

    f_t(a, W) = [a² C_t + x²_{t+1} τ_t + 2a ρ_t x_{t+1} √(C_t τ_t) + W]
                [k′(a m_t + x_{t+1} b_t)]² + 1.

Hence an approximate ln-likelihood function is (up to addition of constants)

    l(a, W) = Σ_{t=1}^{T} { -(1/2) ln f_t(a, W)
              - [z_{t+1} - k(a m_t + x_{t+1} b_t)]² / (2 f_t(a, W)) }.    (4.9)
Note that

    k(x) = [4eˣ + 1]^{1/2};

    k′(x) = (1/2)[4eˣ + 1]^{-1/2} 4eˣ = 2eˣ / √(4eˣ + 1);

    k″(x) = 2eˣ(2eˣ + 1) / [4eˣ + 1]^{3/2}.    (4.10)
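The transform (4.3) and the functions in (4.10) are straightforward to code; the sketch below (function names are ours) can also be used to check the derivative formulas numerically.

```python
import numpy as np

def freeman_tukey(y):
    """(4.3): g(y) = sqrt(y) + sqrt(y + 1)."""
    y = np.asarray(y, float)
    return np.sqrt(y) + np.sqrt(y + 1.0)

def k(u):
    """(4.10): k(u) = [4 e^u + 1]^(1/2), the approximate mean of g(Y)
    when Y is Poisson with rate e^u."""
    return np.sqrt(4.0 * np.exp(u) + 1.0)

def kprime(u):
    """First derivative of k."""
    return 2.0 * np.exp(u) / np.sqrt(4.0 * np.exp(u) + 1.0)

def kprime2(u):
    """Second derivative of k."""
    e = np.exp(u)
    return 2.0 * e * (2.0 * e + 1.0) / (4.0 * e + 1.0) ** 1.5
```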
Further,

    ∂f_t(a, W)/∂W = [k′(a m_t + x_{t+1} b_t)]²,    (4.11)

    ∂f_t(a, W)/∂a = [2a C_t + 2 x_{t+1} ρ_t √(C_t τ_t)][k′(a m_t + x_{t+1} b_t)]²
                    + [a² C_t + x²_{t+1} τ_t + 2a ρ_t x_{t+1} √(C_t τ_t) + W]
                      · 2 k′(a m_t + x_{t+1} b_t) k″(a m_t + x_{t+1} b_t) m_t.    (4.12)
Differentiating (4.9) with respect to a and W we obtain

    ∂l/∂a = Σ_{t=1}^{T} { -[1/(2 f_t(a, W))] [∂f_t(a, W)/∂a]
              [1 - (z_{t+1} - k(a m_t + x_{t+1} b_t))² / f_t(a, W)]
              + (z_{t+1} - k(a m_t + x_{t+1} b_t)) k′(a m_t + x_{t+1} b_t) m_t / f_t(a, W) },    (4.13)

    ∂l/∂W = Σ_{t=1}^{T} -[1/(2 f_t(a, W))] [∂f_t(a, W)/∂W]
              [1 - (z_{t+1} - k(a m_t + x_{t+1} b_t))² / f_t(a, W)].    (4.14)

The expected information terms are

    E[-∂²l/∂a²] = Σ_{t=1}^{T} { [1/(2 f_t(a, W)²)] [∂f_t(a, W)/∂a]²
                  + [k′(a m_t + x_{t+1} b_t) m_t]² / f_t(a, W) },    (4.15)

    E[-∂²l/∂W²] = Σ_{t=1}^{T} [1/(2 f_t(a, W)²)] [∂f_t(a, W)/∂W]²,    (4.16)

    E[-∂²l/∂a∂W] = Σ_{t=1}^{T} [1/(2 f_t(a, W)²)] [∂f_t(a, W)/∂W][∂f_t(a, W)/∂a].    (4.17)
To avoid difficulties with a Newton procedure involving the restricted
range W > 0, we reparameterize: W = e^γ. In this case

    ∂l/∂γ = e^γ ∂l/∂W,
    E[-∂²l/∂γ²] = E[-∂²l/∂W²] e^{2γ},
    E[-∂²l/∂a∂γ] = E[-∂²l/∂a∂W] e^γ,

and W is replaced by e^γ in all the expressions (4.12)-(4.17).

A Kalman procedure with this method of estimating a and W is similar to
that of Section 3.2, except that Step 1 is replaced by the numerical solution
of the system of equations

    ∂l/∂a = 0,    (4.18)
    ∂l/∂γ = 0    (4.19)

by a Newton procedure using Fisher's scoring method. Set W = e^γ, where γ is
the solution. Occasionally, there are numerical problems with the Newton
procedure. In these cases a search is used to find estimates of a and W.
A summary of the Newton (with default search) procedure is as follows.
The procedure starts with the previously estimated a and γ, called a₀ and γ₀.

0. Set B = 0.

1. Compute ∂l/∂a, ∂l/∂γ, E[-∂²l/∂a²], E[-∂²l/∂γ²] and E[-∂²l/∂a∂γ] using a₀
and γ₀. If |E[-∂²l/∂a∂γ]| < 0.001 or |E[-∂²l/∂γ²]| < 0.0001, go to 2′;
otherwise go to 2.

2. Solve the system of linear equations (in terms of a and γ)

    E[-∂²l/∂a²](a - a₀) + E[-∂²l/∂a∂γ](γ - γ₀) = ∂l/∂a,
    E[-∂²l/∂a∂γ](a - a₀) + E[-∂²l/∂γ²](γ - γ₀) = ∂l/∂γ

for a_n and γ_n. Go to 3.

2′. Set γ_n = γ₀. If ∂l/∂a is of the same sign for both the current and previous
value of a, set

    a_n = a₀ + (∂l/∂a) / E[-∂²l/∂a²]

and go to 3. If ∂l/∂a is of different signs for the current and previous
values of a, then a golden section search is used to solve the equation
∂l/∂a = 0; set a_n equal to the first value of a for which |∂l/∂a| < 0.1. If the
number of iterations for this one-dimensional search is greater than 50, go to 4.

3. If max{|∂l/∂a(a_n)|, |∂l/∂γ(γ_n)|} < 0.1, stop and take a_n and γ_n as the
estimated values.

3′. If |a_n| ≤ 5 and |γ_n| ≤ 5, go to 5.

3″. If |a_n| > 5 or |γ_n| > 5, let B = B + 1. If B = 1, go to 4. If B = 2, go to 4′.
If B ≥ 3, go to 4″.

4. Do a two-dimensional search of the ln-likelihood function (4.9),
parameterized in terms of a and W, for a maximizing value. The grid
for a is [-4, 4] in steps of 0.2; the grid for W is [0, 8] in steps of 0.2. If

    max{|∂l/∂a(a_n)|, |∂l/∂γ(ln W)|} < 0.1,

return a_n and γ_n = ln W as the estimated values; otherwise set a₀ = a_n
and γ₀ = ln W and return to 1.

4′. Same as 4 except the grid for a is [-8, 8] in steps of 0.1 and the grid for W
is [0.1, 8] in steps of 0.1, with the additional points 0.000001, 0.00001,
0.0001, 0.001, 0.01.

4″. γ is set equal to its previous value and a golden section search for the
zero of ∂l/∂a is done as in 2′.

5. Set a₀ = a_n and γ₀ = γ_n and return to 1 if the number of iterations is less
than 50. If the number of iterations is greater than 50, set B = B + 1. If
B = 1, go to 4. If B ≥ 6, return a_n and γ_n as the estimated values. If
1 < B ≤ 5, go to 6.

6. Compute ∂l/∂a(a_n) and ∂l/∂γ(γ_n). If |∂l/∂a(a_n)| < |∂l/∂γ(γ_n)|, go to 6′;
otherwise go to 6″.

6′. Fix a_n = a₀ and do a search over the interval [-4B, 4B] for that γ for
which ∂l/∂γ = 0. Set γ_n equal to the first value of γ for which |∂l/∂γ| < 0.1;
if the number of iterations is greater than 50, go to 4.

6″. Fix γ_n = γ₀ and do a search over the interval [-4B, 4B] for that a for
which ∂l/∂a = 0; set a_n equal to the first value of a for which |∂l/∂a| < 0.1;
if the number of iterations is greater than 50, go to 4.
The following are empirical observations. The procedure involves a great
deal of computing. For small times t, the estimate of W can be very close to
zero; a lower bound of 0.1 was placed on the value of W that is input into the
Kalman procedure. Further, a lower bound of -2 was also put on μ̂_t. The
stopping criterion of max{|∂l/∂a|, |∂l/∂γ|} < 0.1 seems to be adequate; a smaller
tolerance can result in the procedure becoming unstable and greatly
increasing the computational effort.
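The grid-search fallback of step 4 is easy to sketch once f_t and the ln-likelihood (4.9) are coded. The function names and the array-based calling convention below are ours: z[t] plays the role of z_{t+1}, x[t] is the covariate x_{t+1}, and m, b, C, tau, rho are the filtered quantities m_t, b_t, C_t, τ_t, ρ_t.

```python
import numpy as np

def ft_loglik(a, W, z, m, b, x, C, tau, rho):
    """Approximate ln-likelihood (4.9) of the Freeman-Tukey transformed counts."""
    u = a * m + x * b
    mean = np.sqrt(4.0 * np.exp(u) + 1.0)        # k(a m_t + x_{t+1} b_t)
    kp = 2.0 * np.exp(u) / mean                  # k'(a m_t + x_{t+1} b_t)
    v = a * a * C + x * x * tau + 2.0 * a * rho * x * np.sqrt(C * tau) + W
    f = v * kp ** 2 + 1.0                        # f_t(a, W)
    return np.sum(-0.5 * np.log(f) - (z - mean) ** 2 / (2.0 * f))

def grid_step4(z, m, b, x, C, tau, rho):
    """Step 4 fallback: maximize (4.9) over the grid described in the text,
    a in [-4, 4] by 0.2 and W in [0, 8] by 0.2."""
    grid_a = np.arange(-4.0, 4.0 + 1e-9, 0.2)
    grid_W = np.arange(0.0, 8.0 + 1e-9, 0.2)
    vals = [(ft_loglik(a, W, z, m, b, x, C, tau, rho), a, W)
            for a in grid_a for W in grid_W]
    _, a_best, W_best = max(vals)
    return a_best, W_best
```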
5. ESTIMATION OF a AND W FOR THE POISSON/NORMAL MODEL:
LOGARITHMIC TRANSFORMATION
Results of Lambert (1989) suggest that when count data arise from a
mixture of Poisson distributions, then a smaller power transformation of the
counts than the square root is needed to stabilize the variance of the count
data. In particular, if the variability of the mixture is largish, then a log
transformation may stabilize the variance. In this section simple procedures
for estimating a and W based on a log transformation of the count data are
described.
Suppose Y_t and μ_t are as in (1.1) and (1.2). Given μ_{t+1} and β, the first two
terms of a Taylor expansion yield

    ln Y_{t+1} ≈ ln h_{t+1} + x_{t+1} β + μ_{t+1}
                 + [h_{t+1} exp{x_{t+1} β + μ_{t+1}}]^{-1}
                   [Y_{t+1} - h_{t+1} exp{x_{t+1} β + μ_{t+1}}].    (5.1)

Let

    Z_{t+1} = ln(Y_{t+1} + 1/2) - ln h_{t+1} - x_{t+1} b_t.    (5.2)
Using the first term of the Taylor expansion (5.1),

    Z_{t+1} ≈ μ_{t+1} = a μ_t + ω_{t+1} ≈ a m_t + ω_{t+1}.    (5.3)

Hence, given [Y_0, Y_1, ..., Y_t], the distribution of Z_{t+1} has approximate mean a m_t
and variance W. We will approximate the distribution of Z_{t+1} with a normal
distribution having mean a m_t and variance W. A ln-likelihood function
under this approximation is (up to addition of constants)

    l(a, W) = Σ_{t=1}^{T-1} { -(1/2) ln W - (z_{t+1} - a m_t)²/(2W) };    (5.4)

    ∂l/∂a = Σ_{t=1}^{T-1} (z_{t+1} - a m_t) m_t / W = 0;    (5.5)

    ∂l/∂W = Σ_{t=1}^{T-1} { -1/(2W) + (z_{t+1} - a m_t)²/(2W²) } = 0.    (5.6)

Thus,

    â_L(T) = Σ_{t=1}^{T-1} z_{t+1} m_t / Σ_{t=1}^{T-1} m_t²    (5.7)

and

    Ŵ_L(T) = [1/(T-1)] Σ_{t=1}^{T-1} (z_{t+1} - â_L(T) m_t)²    (5.8)

are the resulting estimates of a and W.
Simulation experiments suggest that â_L and Ŵ_L tend to have a large
positive bias.
Another possibility is to estimate a by

    â_m(T) = Σ_{t=1}^{T-1} m_{t+1} m_t / Σ_{t=1}^{T-1} m_t²    (5.9)

as in (3.7) of Section 3, and W by

    Ŵ_{m,L}(T) = [1/(T-1)] Σ_{t=1}^{T-1} (z_{t+1} - â_m(T) m_t)².    (5.10)

These estimates are easy to compute. Simulation experiments suggest that
this estimate â_m tends to have a negative bias and Ŵ_{m,L} has a positive bias,
but not as large as that of Ŵ_L.
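A sketch of the transformation (5.2) and the moment estimates (5.9)-(5.10); the function names and the array alignment convention (z[i] is the transformed count paired with m[i], so len(z) == len(m) - 1) are ours.

```python
import numpy as np

def z_transform(y, h, x, b):
    """(5.2): Z_{t+1} = ln(Y_{t+1} + 1/2) - ln h_{t+1} - x_{t+1} b_t."""
    y = np.asarray(y, float)
    return (np.log(y + 0.5) - np.log(np.asarray(h, float))
            - np.asarray(x, float) * np.asarray(b, float))

def a_W_from_log(z, m):
    """(5.9)-(5.10): a from the lag-one moments of the m_t (as in (3.7)),
    W from the residuals of z against a*m."""
    z = np.asarray(z, float)
    m = np.asarray(m, float)
    a = np.sum(m[1:] * m[:-1]) / np.sum(m[:-1] ** 2)   # (5.9)
    W = np.mean((z - a * m[:-1]) ** 2)                 # (5.10)
    return a, W
```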
6. RESULTS FROM SIMULATION EXPERIMENTS
In this section we report results of simulation experiments concerning the
behavior of the approximate Kalman filter of Section 2 under various
procedures for estimating a and W.
Each replication of the simulation consists of generating a Poisson time
series {y_t; t = 0, 1, 2, ..., T} using equations (1.1)-(1.2). The random numbers
were generated using LLRANDOM II; cf. Lewis et al. (1981). If a < 1, then μ_0
is drawn from the stationary distribution of {μ_t; t ≥ 0}. The Kalman procedure
with each of the procedures for estimating a and W is then applied to
{y_t; t = 0, 1, 2, ..., T}. For each time t, the estimate

    λ̂_t = h_t exp{μ̂_t + x_t b_t}    (6.1)

is computed and the mean square prediction error

    MSE = (1/T) Σ_{t=1}^{T} (y_t - λ̂_t)²    (6.2)

is calculated. In addition, the average estimated values of a, W and β are
computed:

    m(â) = (1/T) Σ_{t=1}^{T} â_t,    (6.3)

    m(Ŵ) = (1/T) Σ_{t=1}^{T} Ŵ_t,    (6.4)

    m(β̂) = (1/T) Σ_{t=1}^{T} b_t,    (6.5)

where â_t and Ŵ_t are the estimates of a and W obtained after the observation at
time t and b_t is as in Section 2.
The simulation is replicated N times. If e_i, m_i(â), m_i(Ŵ), m_i(β̂) represent
the summary statistics from the i-th replication, then the means of the summary
statistics are computed over the N replications; i.e.,

    ē = (1/N) Σ_{i=1}^{N} e_i,    (6.6)

    ā = (1/N) Σ_{i=1}^{N} m_i(â),    (6.7)

    W̄ = (1/N) Σ_{i=1}^{N} m_i(Ŵ),    (6.8)

    β̄ = (1/N) Σ_{i=1}^{N} m_i(β̂).    (6.9)

These latter statistics are then compared for the different procedures of
estimating a and W.
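The replication averaging just described is a single array operation; in the sketch below the tuple layout of the per-replication summaries is our assumption.

```python
import numpy as np

def replication_means(summaries):
    """Average the per-replication summaries over the N replications.
    `summaries` is assumed to be a list of tuples
    (mse, mean_a_hat, mean_W_hat, mean_beta_hat), one per replication."""
    arr = np.asarray(summaries, float)
    return arr.mean(axis=0)   # (e-bar, a-bar, W-bar, beta-bar)
```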
Results for the following procedures for estimating a and W are reported.

1. DD: a and W are both known and are not estimated.

2. FT: Both a and W are estimated by a two-dimensional search of the ln-
likelihood based on the Freeman-Tukey transformation (4.9). The grid
for W is [0.1, 3] in increments of 0.1; the grid for a is [-3, 3] in increments
of 0.1.

3. M/FT: a is estimated by the moment estimator (3.7); W is estimated by
searching the one-dimensional likelihood based on the Freeman-Tukey
transformation with â set equal to (3.7); the grid is [0.1, 3] with steps of
size 0.1.

4. M/LN: a is estimated using the moment estimator (3.7). W is estimated
using the moment estimator (5.10) based on the logarithmic transformation.

5. LN/LN: a and W are estimated using the moment estimators (5.9) and
(5.10) based on the logarithmic transformation.

All the procedures have a lower bound of -2 on μ̂_t, a lower bound of 0.1
on Ŵ_t, and an upper bound of 1 on the absolute value of â_t.
For the results of the simulation experiments reported in Table 1, a = 0.5,
W = 0.25, x_t ≡ 1, h_t ≡ 1, β = 0.5. The initial values of the estimates are
Ŵ = 0.25, â = 0.5, μ̂_0 = 0; for the Kalman, C_0 = 1, τ_0 = 1, ρ_0 = 0, b_0 = 0. The
two-dimensional search for the estimates of a and W in FT takes a large
amount of computer time. As a result, the number of replications for the
experiments reported in Table 1 is small. However, the results of Table 1
suggest that the two most promising procedures are M/LN and M/FT.

In Table 2, the number of replications is 100. Only the procedures M/LN and
M/FT are compared. In Table 2, the initial values are Ŵ = 0.25, â = 0.5,
μ̂_0 = 0; for the Kalman, C_0 = 1, τ_0 = 1, ρ_0 = 0, b_0 = 0.

In Table 3, {x_t} = {0.25, 0.5, 1, 0.25, 0.5, 1, ...}, β = 0.5, W = 0.25, a = 0.5. The
initial values are Ŵ = 0.25, â = 0.5, μ̂_0 = 0; for the Kalman, C_0 = 1, τ_0 = 1,
ρ_0 = 0, b_0 = 0.

In Table 4, the parameters are the same as in Table 3 except W = 1. The
initial values are Ŵ = 0.25, â = 0.5, μ̂_0 = 0; for the Kalman, C_0 = 1, τ_0 = 1,
ρ_0 = 0, b_0 = 0.

Comparison of Tables 2 and 3 suggests, not surprisingly, that the mean
square prediction error is smaller when there is a variable covariate {x_t}.

Comparison of Tables 3 and 4 suggests, not surprisingly, that a larger
value of W results in a larger mean square prediction error.
TABLE 1.

SERIES LENGTH = 5; 20 REPLICATIONS

METHOD:         DD      FT      M/FT    M/LN    LN/LN
Mean MSE        4.34    4.91    4.77    4.19    4.96
St. Dev. MSE    3.21    3.82    3.64    3.17    3.76
Max MSE         11.0    13.2    11.7    12.5    13.5
Mean â          --      0.10    0.11    0.15    0.26
Mean Ŵ          --      0.48    0.67    0.98    0.77

SERIES LENGTH = 10; 20 REPLICATIONS

METHOD:         DD      FT      M/FT    M/LN    LN/LN
Mean MSE        2.94    3.15    3.08    2.77    3.36
St. Dev. MSE    2.27    2.42    2.28    1.93    2.54
Max MSE         10.4    11.1    10.1    8.27    11.2
Mean â          --      0.12    0.08    0.09    0.19
Mean Ŵ          --      0.23    0.32    0.78    0.69

SERIES LENGTH = 20; 40 REPLICATIONS

METHOD:         DD      FT      M/FT    M/LN    LN/LN
Mean MSE        3.38    3.90    3.72    3.29    3.97
St. Dev. MSE    2.00    2.81    2.43    2.16    2.62
Max MSE         10.7    13.5    12.1    10.3    11.2
Mean â          --      0.33    0.15    0.17    0.31
Mean Ŵ          --      0.22    0.31    0.79    0.73
TABLE 2. x_t ≡ 1, a = 0.5, W = 0.25

SERIES LENGTH = 5; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        4.21    4.75    4.32
St. Dev. MSE    4.68    5.81    5.40
Max MSE         28.7    38.0    34.1
Mean â          --      0.08    0.09
Mean Ŵ          --      0.55    0.93
Mean β̂          0.34    0.25    0.23

SERIES LENGTH = 10; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        3.98    4.41    4.14
St. Dev. MSE    3.27    3.87    3.58
Max MSE         16.4    19.0    17.3
Mean â          --      0.07    0.10
Mean Ŵ          --      0.45    0.88
Mean β̂          0.42    0.36    0.31

SERIES LENGTH = 20; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        3.96    4.35    4.18
St. Dev. MSE    2.59    3.30    3.06
Max MSE         20.3    26.5    23.6
Mean â          --      0.10    0.13
Mean Ŵ          --      0.41    0.82
Mean β̂          0.48    0.43    0.37
TABLE 3. VARIABLE {x_t}, a = 0.5, W = 0.25

SERIES LENGTH = 5; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        3.69    3.89    3.70
St. Dev. MSE    3.83    4.09    4.00
Max MSE         15.6    18.4    19.9
Mean â          --      0.13    0.12
Mean Ŵ          --      0.52    0.80
Mean β̂          0.32    0.21    0.18

SERIES LENGTH = 20; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        3.11    3.32    3.21
St. Dev. MSE    2.48    2.80    2.68
Max MSE         20.4    22.7    20.2
Mean â          --      0.12    0.10
Mean Ŵ          --      0.35    0.78
Mean β̂          0.41    0.36    0.28
TABLE 4. VARIABLE {x_t}, a = 0.5, W = 1

SERIES LENGTH = 5; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        19.21   20.70   19.88
St. Dev. MSE    52.2    53.0    52.1
Max MSE         415.1   390.1   393.5
Mean â          --      0.19    0.18
Mean Ŵ          --      0.93    1.05
Mean β̂          0.31    0.14    0.12

SERIES LENGTH = 20; 100 REPLICATIONS

METHOD:         DD      M/FT    M/LN
Mean MSE        18.93   21.6    20.4
St. Dev. MSE    28.6    31.5    30.9
Max MSE         181.7   192.1   188.5
Mean â          --      0.20    0.19
Mean Ŵ          --      1.2     1.1
Mean β̂          0.55    0.32    0.31
Of the two procedures, M/LN tends to have the smaller mean MSE.
M/LN tends to have a positive bias estimating W and negative biases
estimating a and β. The procedure M/FT takes more computational effort,
tends to underestimate W, a and β, and produces a slightly higher mean MSE.

Tables 5-7 report results comparing these procedures with the procedure
NFT, in which both a and W are estimated using the Newton procedure of
Section 4. For all the procedures reported in these tables, there is no
bounding on â. In Table 5, x_t ≡ 1, h_t ≡ 1, β = 0.5, a = 0.5, W = 0.25. In
Table 6, x_t = 0.25, 0.5, 1, 0.25, 0.5, 1, ..., and the other parameters are as before.
For Tables 5 and 6 the initial values are Ŵ = 0.25, â = 0.5, μ̂_0 = 0, C_0 = 1,
τ_0 = 1, ρ_0 = 0. The values in parentheses are estimates of standard
deviations; e.g.,

    [ (1/N) Σ_{i=1}^{N} (m_i(â) - ā)² ]^{1/2}.

For Table 7, {x_t} is as in Table 6, a = 0.5, W = 0.25, β = 0.5, and the initial
estimates are â = 0, Ŵ = 1, C_0 = 1, τ_0 = 1, ρ_0 = 0, μ̂_0 = 0.

The Newton procedure of Section 4 does not take as much time as the
two-dimensional search used in Tables 1-2; however, it still requires much
more computational effort than the other two procedures. Once again, based
on the mean MSE and computational effort, the procedure M/LN appears to be
the most attractive. The procedure M/LN once again appears to overestimate W
and underestimate a and β.
TABLE 5. x_t ≡ 1; a = 0.5, W = 0.25, β = 0.5

SERIES LENGTH = 5; 25 REPLICATIONS

METHOD:         DD      NFT           M/FT          M/LN
Mean MSE        3.94    4.44          4.32          3.89
St. Dev. MSE    3.08    3.66          3.49          3.10
Max MSE         11.0    13.2          11.7          12.5
Mean â          --      0.06 (0.37)   0.08 (0.22)   0.10 (0.23)
Mean Ŵ          --      0.46 (0.47)   0.60 (0.60)   0.93 (0.45)
Mean β̂          0.28    0.18          0.21          0.18

SERIES LENGTH = 20; 25 REPLICATIONS

METHOD:         DD      NFT           M/FT          M/LN
Mean MSE        3.11    3.40          3.32          3.14
St. Dev. MSE    1.83    2.12          2.11          2.03
Max MSE         8.50    9.29          9.60          8.78
Mean â          --      0.22 (0.39)   0.10 (0.30)   0.12 (0.29)
Mean Ŵ          --      0.29 (0.24)   0.32 (0.26)   0.80 (0.23)
Mean β̂          0.38    0.23          0.31          0.23
TABLE 6. VARIABLE {x_t}; a = 0.5, W = 0.25, β = 0.5

SERIES LENGTH = 20; 25 REPLICATIONS

METHOD:         DD      NFT           M/FT          M/LN
Mean MSE        2.63    2.77          2.81          2.63
St. Dev. MSE    1.78    1.95          2.06          1.79
Max MSE         9.24    10.10         10.86         9.12
Mean â          --      0.21 (0.38)   0.06 (0.32)   0.06 (0.34)
Mean Ŵ          --      0.20 (0.14)   0.25 (0.20)   0.68 (0.18)
Mean β̂          0.33    0.21          0.27          0.18
TABLE 7. VARIABLE {x_t}; a = 0.5, W = 0.25, β = 0.5
â = 0, Ŵ = 1, C_0 = 1, τ_0 = 1, ρ_0 = 0, μ̂_0 = 0

SERIES LENGTH = 20; 100 REPLICATIONS

METHOD:         DD      NFT           M/FT          M/LN
Mean MSE        2.95    3.25          3.21          3.04
St. Dev. MSE    2.21    2.70          2.67          2.28
Max MSE         13.2    16.5          16.5          13.8
Mean â          --      0.12 (0.34)   0.11 (0.28)   0.09 (0.29)
Mean Ŵ          --      0.34 (0.44)   0.36 (0.32)   0.73 (0.19)
Mean β̂          0.36    0.25          0.28          0.21
7. SUMMARY AND CONCLUSIONS

In this paper we have investigated approximate methods for estimation
and prediction for the hierarchical Poisson/normal time series model given
in (1.1)-(1.2). For given values of the random walk parameters a and W, the
joint distribution of (β, μ_t) is approximated by a bivariate normal distribution
using the Laplace method. Various heuristic methods for estimating a and W
are presented. Based on simulation results and ease of computation, it is
recommended that a be estimated by (3.7),

    â(T) = Σ_{t=1}^{T} m_t m_{t-1} / Σ_{t=1}^{T} m²_{t-1},

and W be estimated by (5.10),

    Ŵ(T) = [1/(T-1)] Σ_{t=1}^{T-1} (z_{t+1} - â(T) m_t)²,

where z_{t+1} is given by (5.2).

These estimates of a and W, along with the approximate Kalman
procedure using the Laplace method, provide a computationally easy
procedure for prediction of the time series.
Figures 4-7 show results of a simulation of model (1.1)-(1.2) for
t = 1, ..., 100. In the example β = 0.5, {x_t} takes the values {0.25, 0.5, 1} over and
over, and h_t ≡ 1. {ω_t} are iid normal with mean 0 and variance 0.25, and a = 0.5.
The simulation starts with μ_0 drawn from the stationary distribution of {μ_t}, a
normal distribution with mean 0 and variance W/(1 - a²) = 1/3. The values of a
and W are estimated using (3.7) and (5.10). Initial values are â = 0.5 and
Ŵ = 1, C_0 = 1, τ_0 = 1, ρ_0 = 0, β̂_0 = 0, μ̂_0 = 0. There are no bounds on
μ̂_t, â and Ŵ.

Figure 4 displays the count data. Also displayed are the true λ_t (circles),
the estimated λ_t when both a and W are known (dashed line) and the
estimated λ_t when both a and W are estimated (solid line). The λ̂_t when both a
and W are estimated is perhaps a little more responsive to changes in the data
than when a and W are known. The difference between the two estimates of λ_t
is greatest for small times t.

Figure 5 presents plots of the estimated √W and â. Note that Ŵ has a
positive bias, which will make the Kalman procedure more responsive to the
count data. The estimated values of a appear to be reasonable.

Figure 6 presents estimates of β and its estimated standard deviation √τ_t,
both for the Kalman with a and W known (dotted line) and with a and W
unknown (solid line). The estimates of τ_t generally decrease as t increases,
reflecting the model assumption that β is constant. The estimates of β appear
to be reasonable. The estimates of β with a and W also estimated tend to be
smaller than those for which a and W are known; this behavior may be due to
the fact that estimation of a and W accounts for some of the variability of
the data that would otherwise be accounted for by β.

Figure 7 displays the true μ_t (circles), the estimated μ̂_t = m_t with
parameters a and W known (dotted line) and the estimated μ̂_t with parameters
a and W estimated (solid line). The count data are also displayed. The
estimates of μ_t with a and W estimated are more variable than those when a
and W are known, reflecting the greater responsiveness of the Kalman to the
data when the estimated W has a positive bias. Once again the estimates of μ_t
appear to be reasonable.
REFERENCES
Bishop, Y. M. M., Fienberg, S. E. and Holland, P. W., Discrete Multivariate
Analysis: Theory and Practice, MIT Press, Cambridge, MA, 1975.
Carlin, B. P., Polson, N. G. and Stoffer, D. S., "A Monte Carlo approach to
nonnormal and nonlinear state-space modeling," to appear, J. Amer. Statist.
Assoc.
Cox, D. R. and Hinkley, D. V., Theoretical Statistics, Chapman and Hall,
London, 1974.
Cox, D. R. and Lewis, P. A. W., The Statistical Analysis of Series of Events,
Methuen, London, 1966.
de Bruijn, N. G., Asymptotic Methods in Analysis, Interscience, New York, 1958.
Delampady, M., Yee, I. and Zidek, J. V., "Bayesian approach to a problem in
discrete time series" (abstract), The Institute of Mathematical Statistics
Bulletin, 20 (1991), p. 308.
Easton, G. S., "Location compromise maximum likelihood estimators," in
Configural Polysampling, eds. S. Morgenthaler and J. W. Tukey, Wiley, New
York, 1991, pp. 157-192.
Feller, W., An Introduction to Probability Theory and its Applications, Vol. II,
Wiley, New York, 1966.
Freeman, M. F. and Tukey, J. W., "Transformations related to the angular and
the square root," Ann. Math. Stat., 21 (1950), pp. 607-611.
Freeman, M. F. and Tukey, J. W., "Transformations related to the angular and
the square root," Memorandum Report 24, Statistical Research Group,
Princeton, 1949.
Harvey, A. C. and Fernandes, C., "Time series models for count or qualitative
observations" (with discussion), J. Business and Econ. Statist., 7 (1989),
pp. 407-422.
Lambert, D., "Taking roots of count data," AT&T Statistical Research Report
No. 88, November 1989.
Lewis, P. A. W. and Uribe, L., The New Naval Postgraduate School Random
Number Generator: LLRANDOM II, Naval Postgraduate School Technical
Report NPS55-81-005, Monterey, CA, 1981.
West, M. and Harrison, J., Bayesian Forecasting and Dynamic Models, Springer-
Verlag, New York, 1989.
[Figure: simulated count data with estimated and true intensity; vertical axes labeled DATA and LAMBDA.]
[Figure: parameter estimates; vertical axes labeled EST ALPHA and SQ ROOT EST W.]
[Figure: parameter estimates; vertical axes labeled EST BETA and EST TAU.]
[Figure: count data and estimates; vertical axis labeled DATA.]
INITIAL DISTRIBUTION LIST
1. Library (Code 0142) 2
Naval Postgraduate School
Monterey, CA 93943-5000
2. Defense Technical Information Center 2
Cameron Station
Alexandria, VA 22314
3. Office of Research Administration (Code 012) 1
Naval Postgraduate School
Monterey, CA 93943-5000
4. Prof. Peter Purdue 1
Code ORPd
Naval Postgraduate School
Monterey, CA 93943-5000
5. Department of Operations Research (Code 55) 1
Naval Postgraduate School
Monterey, CA 93943-5000
6. Prof. Donald Gaver, Code ORGv 15
Naval Postgraduate School
Monterey, CA 93943-5000
7. Prof. Patricia Jacobs 15
Code OR/Jc
Naval Postgraduate School
Monterey, CA 93943-5000
8. Center for Naval Analyses 1
4401 Ford Avenue
Alexandria, VA 22302-0268
9. Dr. David Brillinger
Statistics Department
University of California
Berkeley, CA 94720
10. Dr. R. Gnanadesikan
Bellcore
445 South Street
Morris Township, NJ 07962-1910
11. Prof. Bernard Harris
Dept. of Statistics
University of Wisconsin
610 Walnut Street
Madison, WI 53706
12. Prof. I. R. Savage
Dept. of Statistics
Yale University
New Haven, CT 06520
13. Prof. W. R. Schucany
Dept. of Statistics
Southern Methodist University
Dallas, TX 75222
14. Prof. D. C. Siegmund
Dept. of Statistics
Sequoia Hall
Stanford University
Stanford, CA 94305
15. Prof. H. Solomon 1
Department of Statistics
Sequoia Hall
Stanford University
Stanford, CA 94305
16. Dr. Ed Wegman 1
George Mason University
Fairfax, VA 22030
17. Dr. P. Welch 1
IBM Research Laboratory
Yorktown Heights, NY 10598
18. Dr. Neil Gerr 1
Office of Naval Research
Arlington, VA 22217
19. Prof. Roy Welsch
Sloan School
M.I.T.
Cambridge, MA 02139
20. Dr. J. Abrahams, Code 1111, Room 607
Mathematical Sciences Division, Office of Naval Research
800 North Quincy Street
Arlington, VA 22217-5000
21. Prof. J. R. Thompson
Dept. of Mathematical Science
Rice University
Houston, TX 77001
22. Dr. P. Heidelberger
IBM Research Laboratory
Yorktown Heights, NY 10598
23. Prof. M. Leadbetter
Department of Statistics
University of North Carolina
Chapel Hill, NC 27514
24. Prof. D. L. Iglehart
Dept. of Operations Research
Stanford University
Stanford, CA 94305
25. Prof. J. B. Kadane
Dept. of Statistics
Carnegie-Mellon University
Pittsburgh, PA 15213
26. Prof. J. Lehoczky
Department of Statistics
Carnegie-Mellon University
Pittsburgh, PA 15213
28. Prof. M. Mazumdar
Dept. of Industrial Engineering
University of Pittsburgh
Pittsburgh, PA 15235
29. Prof. M. Rosenblatt
Department of Mathematics
University of California, San Diego
La Jolla, CA 92093
30. Prof. H. Chernoff
Department of Statistics
Harvard University
1 Oxford Street
Cambridge, MA 02138
31. Dr. T. J. Ott
Bellcore,
445 South Street
Morristown, NJ 07962-1910
(MRE 2P388)
32. Dr. Alan Weiss
AT&T Bell Laboratories
Mountain Avenue
Murray Hill, NJ 07974
33. Prof. Joseph R. Gani
Mathematics Department
University of California
Santa Barbara, CA, 93106
34. Prof. Frank Samaniego
Statistics Department
University of California
Davis, CA 95616
35. Dr. James McKenna
Bell Communications Research
445 South Street
Morristown, NJ 07960-1910
35. Dr. Shlomo Halfin
Bellcore,
445 South Street
Morristown, NJ 07962-1910
(MRE 2L309)
36. Commander
Attn: G. F. Kramer
Naval Ocean Systems Center
Code 421
San Diego, CA 921525000
37. Prof. Tom A. Louis
School of Public Health
University of Minnesota
Mayo Bldg. A460
Minneapolis, MN 55455
38. Dr. Nan Laird
Biostatistics Dept.
Harvard School of Public Health
677 Huntington Ave.
Boston, MA 02115
39. Dr. Marvin Zelen
Biostatistics Department
Harvard School of Public Health
677 Huntington Ave.
Boston, MA 02115
40. Dr. John Orav
Biostatistics Department
Harvard School of Public Health
677 Huntington Ave.
Boston, MA 02115
41. Prof. R. Douglas Martin
Department of Statistics, GN22
University of Washington
Seattle, WA 98195
42. Prof. W. Stuetzle
Department of Statistics
University of Washington
Seattle, WA 98195
43. Prof. F. W. Mosteller
Department of Statistics
Harvard University
1 Oxford St.
Cambridge, MA 02138
44. Dr. D. C. Hoaglin
Department of Statistics
Harvard University
1 Oxford Street
Cambridge, MA 02138
45. Prof. N. D. Singpurwalla
George Washington University
Washington, DC 20052
46. Prof. George S. Fishman
Curr. in OR & Systems Analysis
University of North Carolina
Chapel Hill, NC 20742
47. Dr. Alan F. Petty
Code 7930
Naval Research Laboratory
Washington, DC 20375
48. Prof. Bradley Efron
Statistics Dept.
Sequoia Hall
Stanford University
Stanford, CA 94305
49. Prof. Carl N. Morris
Statistics Department
Harvard University
1 Oxford St.
Cambridge, MA 02138
50. Dr. John E. Rolph
RAND Corporation
1700 Main St.
Santa Monica, CA 90406
51. Prof. Linda V. Green
Graduate School of Business
Columbia University
New York, NY 10027
52. Dr. David Burman
AT&T Bell Telephone Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
53. Dr. Edward G. Coffman, Jr.
AT&T Bell Telephone Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
54. Prof. William Jewell
Operations Research Department
University of California, Berkeley
Berkeley, CA 94720
55. Prof. D. C. Siegmund
Dept. of Statistics, Sequoia Hall
Stanford University
Stanford, CA 94305
56. Operations Research Center, Rm E40164
Massachusetts Institute of Technology
Attn: R. C. Larson and J. F. Shapiro
Cambridge, MA 02139
57. Arthur P. Hurter, Jr.
Professor and Chairman
Dept. of Industrial Engineering and Management Sciences
Northwestern University
Evanston, IL 60201-9990
58. Institute for Defense Analysis
1800 North Beauregard
Alexandria, VA 22311
59. Prof. J. W. Tukey
Statistics Dept., Fine Hall
Princeton University
Princeton, NJ 08540
60. Dr. Daniel H. Wagner
Station Square One
Paoli, PA 19301
61. Dr. Colin Mallows
AT&T Bell Telephone Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
62. Dr. D. Pregibon
AT&T Bell Telephone Laboratories
Mountain Avenue
Murray Hill, NJ 07974
63. Dr. Jon Kettenring
Bellcore
445 South Street
Morris Township, NJ 07962-1910
64. Prof. David L. Wallace
Statistics Dept., University of Chicago
5734 S. University Ave.
Chicago, IL 60637
65. Dr. S. R. Dalal
Bellcore
445 South Street
Morristown, NJ 07962-1910
66. Dr. M. J. Fischer
Defense Communications Agency
1860 Wiehle Avenue
Reston, VA 22070
67. Dr. Prabha Kumar
Defense Communications Agency
1860 Wiehle Avenue
Reston, VA 22070
68. Dr. B. Doshi
AT&T Bell Laboratories
HO 3M335
Holmdel, NJ 07733
69. Dr. D. M. Lucantoni
AT&T Bell Laboratories
Holmdel, NJ 07733
70. Dr. V. Ramaswami
MRE 2Q358
Bell Communications Research, Inc.
445 South Street
Morristown, NJ 07960
71. Prof. G. Shantikumar
The Management Science Group
School of Business Administration
University of California
Berkeley, CA 94720
72. Dr. D. F. Daley
Statistics Dept. (I.A.S.)
Australian National University
Canberra, A.C.T. 2606
AUSTRALIA
73. Dr. Guy Fayolle
I.N.R.I.A.
Domaine de Voluceau, Rocquencourt
78150 Le Chesnay Cedex
FRANCE
74. Professor H. G. Daellenbach
Department of Operations Research
University of Canterbury
Christchurch, NEW ZEALAND
75. Koh Peng Kong
OA Branch, DSO
Ministry of Defense
Blk 29 Middlesex Road
SINGAPORE 1024
76. Professor Sir David Cox
Nuffield College
Oxford OX1 1NF
ENGLAND
77. Dr. A. J. Lawrance
Dept. of Mathematics,
University of Birmingham
P. O. Box 363
Birmingham B15 2TT
ENGLAND
78. Dr. F. P. Kelly
Statistics Laboratory
16 Mill Lane
Cambridge
ENGLAND
79. Dr. R. J. Gibbens
Statistics Laboratory
16 Mill Lane
Cambridge
ENGLAND
80. Dr. John Copas
Dept. of Mathematics,
University of Birmingham
P. O. Box 363
Birmingham B15 2TT
ENGLAND
81. Dr. D. Vere-Jones
Dept. of Math, Victoria Univ. of Wellington
P. O. Box 196
Wellington
NEW ZEALAND
82. Prof. Guy Latouche
Université Libre de Bruxelles
C.P. 212, Blvd. du Triomphe
B-1050 Bruxelles
BELGIUM