\documentclass[12pt]{article}
\usepackage{bm}
\addtolength{\textheight}{2.0in}
\addtolength{\topmargin}{-1.15in}
\addtolength{\textwidth}{1.2in}
\addtolength{\evensidemargin}{-0.75in}
\addtolength{\oddsidemargin}{-0.7in}
\setlength{\parskip}{0.1in}
\setlength{\parindent}{0.0in}
\pagestyle{empty}
\raggedbottom
\newcommand{\given}{\, | \,}
\begin{document}
\begin{flushleft}
Prof.~David Draper \\
Department of \\
\hspace*{0.1in} Applied Mathematics and Statistics \\
University of California, Santa Cruz
\end{flushleft}
\begin{center}
\textbf{\large AMS 206: Quiz 3 \textit{[12 points, plus 14 extra credit points]}}
\end{center}
\begin{tabular}{ll}
\hspace*{-0.14in} Name: \underline{\hspace*{5.0in}} \\
\end{tabular}
\vspace*{0.1in}
Please supply your answers to the questions below in the spaces provided. If your answers extend to more than two pages, please label each continuation answer on the extra page(s) with the question it's answering, and (if you're using the scanning option for submission) make sure to scan all pages of your solution for uploading to \texttt{canvas.ucsc.edu}.
\begin{itemize}
\item[(1)]
(\textit{Cromwell's Rule}) As usual notationally, let $\theta$ be the unknown of interest and $D$ be the data set available to You for decreasing Your uncertainty about $\theta$. For simplicity (although more complicated versions of the result examined here also exist), suppose that $\theta$ and $D$ are both binary (which is part of the background information $\mathcal{ B }$ here). For concreteness You can think of screening for a bad outcome $B$, as in the \textit{ELISA} and credit-card case studies: $( \theta = 1 ) =$ (really is $B$), $( \theta = 0 ) =$ (really is not $B$), $( D = 1 ) =$ (screening system says $B$), $( D = 0 ) =$ (screening system says not $B$). You would hope that $\theta$ and $D$ are positively associated, by which I mean that as $D$ goes from 0 to 1, $P ( \theta = 1 \given D \, \mathcal{ B } )$ goes up. Assume that both of the values of $D$ are \textit{a priori} possible, i.e., $P ( D = 0 \given \mathcal{ B } ) > 0$ and $P ( D = 1 \given \mathcal{ B } ) > 0$.
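As a reminder of the standard machinery exercised below (this is background from the case studies, not part of the solution): in this binary setting, Bayes's Theorem says that for $d = 0, 1$,
\[
P ( \theta = 1 \given D = d \, \mathcal{ B } ) = \frac{ P ( \theta = 1 \given \mathcal{ B } ) \, P ( D = d \given \theta = 1 \, \mathcal{ B } ) }{ P ( D = d \given \mathcal{ B } ) } \, ,
\]
in which the denominator is positive by assumption.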
\begin{itemize}
\item[(a)]
Show that if You put prior probability 0 on the proposition $( \theta = 1 )$, $P ( \theta = 1 \given D \, \mathcal{ B } )$ has to be 0, no matter how the data set $D$ comes out. Does this result preserve the hoped-for positive association between $\theta$ and $D$? Explain briefly. \textit{[4 points]}
\vspace*{2.0in}
\item[(b)]
Show that if You put prior probability 1 on the proposition $( \theta = 1 )$, $P ( \theta = 1 \given D \, \mathcal{ B } )$ has to be 1, no matter how the data set $D$ comes out. Does this result preserve the hoped-for positive association between $\theta$ and $D$? Explain briefly. \textit{[4 points]}
\end{itemize}
\newpage
Results (a) and (b), taken together, were named \textit{Cromwell's Rule} by the great British Bayesian D.~V.~Lindley (1923--2013).
\begin{itemize}
\item[(c)]
Bayesian reasoning attempts to be a universally valid and useful method for updating Your uncertainty in light of new information. What lesson should we learn from \textit{Cromwell's Rule}, if we wish this attempt to be successful? Explain briefly. \textit{[4 points]}
\vspace*{2.0in}
\end{itemize}
\item[(2)]
\textit{Extra credit [14 total points]:} Consider again problem 2(B) in Take-Home Test 1. The point of this extra-credit problem is to make a few calculations that provide a different perspective on the quality of fit of the Exponential sampling model to the data.
\begin{itemize}
\item[(a)]
Compute the predictive distribution for the next observation $Y_{ n + 1 }$ given $\bm{ y } = ( y_1, \ldots, y_n )$ in model (5) on page 5 of the Take-Home Test. \textit{[6 extra credit points]}
\vspace*{2.0in}
\item[(b)]
Apply your result in (a) to the data set given at the beginning of the Take-Home Test problem, with the largest observation (21,194) set aside, using a diffuse Inverse Gamma prior. (One way to do this, which involves a tiny amount of cheating by using the data to help specify the prior, is to pick the member of the Inverse Gamma family whose mean equals the sample mean with 21,194 omitted and whose prior sample size is $\epsilon$, for some small $\epsilon$ such as 0.001.) Plot the resulting predictive distribution and locate the omitted observation in it. \textit{[6 extra credit points]}
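For reference in carrying out this elicitation (a standard fact, stated here under the usual shape--scale parameterization of the Inverse Gamma family, which may differ from the convention in Your other course materials): if $\theta \sim$ Inverse Gamma$( \alpha, \beta )$, then
\[
E ( \theta \given \mathcal{ B } ) = \frac{ \beta }{ \alpha - 1 } \quad \mbox{for } \alpha > 1 \, ,
\]
so matching a prior mean amounts to choosing $( \alpha, \beta )$ along this one constraint, with the remaining degree of freedom governed by the prior sample size.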
\newpage
\item[(c)]
How strongly, if at all, do Your calculations call into question this model for this data set? Explain briefly. \textit{[2 extra credit points]}
\end{itemize}
\end{itemize}
\end{document}