After Broken Blade and its sequel Bared Blade, Kelly McCullough wrote Crossed Blades, which I had ordered along with Bared Blade. And once again I read this volume within a few evenings. It is still very enjoyable, maybe more so given the continuity in characters and plots. However, I preferred Bared Blade to Crossed Blades, as the former was more creative in terms of plot and environment. Here, in Crossed Blades, the main character Aral is facing his past, from the destruction of his religious order and of his goddess, to the possible treachery of former friends and mentors, to his attempt to drown this past in top-quality whisky… while dealing with an adopted teenage daughter in the midst of a typical teenage crisis. This new instalment is thus full of introspection and reminiscence of past loves, and frankly a bit dull at times, even though there is a (spoiler warning!!) massive battle against the culprits behind the destruction of the order. The very end is a bit disappointing, but it also hopefully closes a chapter in the hero's life, which means that the next volume, Blade Reforged, may venture into new territories and more into quasi-detective stories. (Two more books in this Blade series are in the making!)
Archive for book reviews
As mentioned in my recent review of Broken Blade by Kelly McCullough, I had already ordered the sequel Bared Blade. And I read this second volume within a few days. Conditional on enjoying fantasy-world detective stories with supernatural beings popping in (or out) at the most convenient times, this volume is indeed very pleasant, with a proper whodunnit, a fairly irrelevant MacGuffin, a couple of dryads (that actually turn into… well, no spoiler!), several false trails, a radical variation on the “good cop-bad cop” duo, and the compulsory climactic reversal of fortune at the very end (not a spoiler since it is the same in every novel!). Once again, a very light read, to the point of being almost ethereal, with no pretence at depth or epic or myth, but rather funny and guaranteed 100% free of the living dead, which is a relief. I actually found this volume better than the first one, which is a rarity, as you will know if you have had enough spare time to read through my non-scientific book reviews. I am thus looking forward to the next break, when I can speed through my next volume of Kelly McCullough, Crossed Blades. (And I hope I will not get more crossed with that one than I was bored with the current volume!)
Over the past few weeks, I read Broken Blade by Kelly McCullough, the start of a series of novels taking place in a fantasy universe and involving the same characters. As in many recent novels I have read, the main character, Aral Kingslayer, is more of an anti-hero, not very congenial and rather drawn towards booze and self-loathing. He is one of the last remaining Assassins of a religion whose goddess got killed (with very little explanation of how and why this happened). Maybe this is a good enough explanation for his current psychological state, hence the “broken” in the title, but it does not make him more endearing! The story itself is more of a sleuthing one, Aral acting as the detective for hire and another character as the client seeking to recover her inheritance. (With the more unusual add-ons of ghouls and zombies and magic. And the more usual theme of corrupt police officers.) Nothing earth-shattering, but still a pleasant ride (one that made me miss my metro station once!). As an indicator of how much I liked it, I have already ordered the sequel Bared Blade. If only to see whether the novelty wears out… Or not!
About a year ago, I mentioned reading Lawrence’s Prince of Thorns and being rather uneasy about the central anti-hero, a 14-year-old at the head of a gang of murderers and worse. I nonetheless bought the second volume, King of Thorns, a few months ago. Once again, I am unhappy about Jorg’s lack of morals and basic compassion, and found it difficult to trudge through the ethical morass that King of Thorns represents… In some sense, the character gains more depth and some minimal kind of humanity, but most of his actions do not make sense, and the added touch of Indiana Jones at a crucial point in the story is just annoying. And I am usually averse to the mix of science fiction and fantasy in vague post-apocalyptic universes. Not recommended, despite the flow of highly positive reviews…
Following a now well-established pattern, let me (re)warn (the few) unwary ‘Og readers that the links to Amazon.com and to Amazon.fr found on this blog are susceptible to earn me a monetary gain [from 4% to 8% on the sales] if a purchase is made by the reader within 24 hours of entering Amazon through such a link, thanks to the “Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com/fr”. Unlike the pattern of last year, and of the year before that, the most purchased item through the links happens to be related to a blog post, since it is Andrew’s book, with 318 copies of its third edition sold through the ‘Og last month! Here are some of the most exotic purchases:
- A Melon for Ecstasy
- Simply Napkins
- Yummy earth lollipops (Mark, is it you?!)
- Clif Mojo peanut butter pretzel (peanut butter AND pretzels?!)
- Sharps container biohazard needle disposal
- Swissmar borner V power mandoline
- Ricochet robots
As usual, the books I actually reviewed over the past months, positively or negatively, were among the top purchases… Like two dozen copies of The BUGS Book. And a dozen of R for Dummies. And even a few of The Cartoon Introduction to Statistics. (Despite a most critical review.) Thanks to all of you using those links (and thus feeding further my book addiction, books that now eventually end up in the math common room at Dauphine or Warwick once I have read them)!
[These are comments sent yesterday by Shravan Vasishth in connection with my post. Since they are rather lengthy, I made them into a post of their own. Shravan is also the author of The Foundations of Statistics and we got in touch through my review of the book. I may address some of his points later, but, for now, I find the perspective of a psycholinguist quite interesting to hear.]
Christian, is the problem for you that the p-value, however low, is only going to tell you the probability of your data (roughly speaking) assuming the null is true? It is not going to tell you anything about the probability of the alternative hypothesis, which is the real hypothesis of interest.
However, limiting the discussion to (Bayesian) hierarchical models (linear mixed models), which is the type of model people often fit in repeated measures studies in psychology (or at least in psycholinguistics), as long as the problem is about figuring out P(θ>0) or P(θ<0), the decision (to act as if θ>0) is going to be the same regardless of whether one uses p-values or a fully Bayesian approach. This is because the likelihood is going to dominate in the Bayesian model.
Andrew has objected to this line of reasoning by saying that making a decision like θ>0 is not a reasonable one in the first place. That is true in some cases, where the result of one experiment never replicates because of study effects or whatever. But there are a lot of effects which are robust and replicable, and where it makes sense to ask these types of questions.
One central issue for me is: in situations like these, using a low p-value to make such a decision is going to yield pretty similar outcomes compared to doing inference using the posterior distribution. The machinery needed to do a fully Bayesian analysis is very intimidating; you need to know a lot, and you need to do a lot more coding and checking than when you fit an lmer type of model.
It took me 1.5 to 2 years of hard work (= evenings spent not reading novels) to get to the point where I knew roughly what I was doing when fitting Bayesian models. I don’t blame anyone for not wanting to put their life on hold to get to such a point. I find the Bayesian method attractive because it actually answers the question I really asked, namely is θ>0 or θ<0? This is really great, I don’t have to beat around the bush any more! (there; I just used an exclamation mark). But for the researcher unwilling (or more likely: unable) to invest the time in the maths and probability theory and the world of BUGS, the distance between a heuristic like a low p-value and the more sensible Bayesian approach is not that large.
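The closeness of the two answers in the simplest setting can be checked with a few lines of code. The following Python sketch (my own toy example, with invented numbers, not anything from Shravan’s models) uses a normal mean with known variance: under a flat prior, the posterior for θ is N(x̄, σ²/n), so the posterior probability P(θ>0 | data) is exactly one minus the one-sided p-value, illustrating why the two decision rules coincide when the likelihood dominates.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_sided_p_value(xbar, sigma, n):
    """p-value for H0: theta <= 0 against H1: theta > 0, known sigma."""
    z = xbar * math.sqrt(n) / sigma
    return 1.0 - phi(z)

def posterior_prob_positive(xbar, sigma, n):
    """P(theta > 0 | data) under a flat prior: posterior is N(xbar, sigma^2/n)."""
    z = xbar * math.sqrt(n) / sigma
    return phi(z)

# Invented data summary: sample mean 0.4, unit variance, 25 observations
xbar, sigma, n = 0.4, 1.0, 25
p = one_sided_p_value(xbar, sigma, n)
post = posterior_prob_positive(xbar, sigma, n)
print(p, post)  # the two quantities sum to one in this flat-prior setting
```

With a proper but diffuse prior the identity P(θ>0 | data) = 1 − p only holds approximately, but the approximation is good whenever the likelihood dominates, which is the situation described above.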
L’œuvre au noir (The Abyss) is a 1968 book by Marguerite Yourcenar that I read decades ago and took with me this summer. It tells the story of Zeno(n), a medieval precursor of the Renaissance humanist, involved in medicine, alchemy, engineering and philosophy, but above all fighting, or at least resisting, the pressure of irrational beliefs and superstitions until they lead him to suicide. As acknowledged by Yourcenar in her notes, the character borrows from Renaissance scientists like Erasmus, Giordano Bruno, Nicolaus Copernicus and Leonardo da Vinci, and medical pioneers like Paracelsus (very much like Paracelsus!), Michel Servet and Étienne Dolet. Zenon is an atheist at a time when atheism was punished by burning at the stake, and an experimenter in an epoch when alchemy and dissection were equated with sorcery. The original title (translated as nigredo) is the first of the three steps in the alchemical transmutation process, but it also applies to the transformation of Zenon from what society had planned for him into a free and rational man. So free that he could choose for himself the time and manner of his death. So rational that he reached a spiritual solitude that made him see his fellow humans with the doctor’s detached compassion and the philosopher’s pessimistic analysis of their superstitions. (The English title simply misses the point!)
This is a 20th-century novel (on which Yourcenar toiled for many years, from three short stories to the final version), which makes the highly modern vision of the imaginary Zenon less remarkable than the steps taken by the real characters above, but the text abounds in remarkable discussions and monologues that reminded me of similar passages in Memoirs of Hadrian. Both books are centred on (impossibly and unrealistically) exceptional men with visions that set them outside their historical time. The fate of Zenon somehow underlies the whole book, and his weak and failed attempt at fleeing Bruges and the Inquisition can be understood as the first step towards his philosopher’s suicide, preferring to face the ecclesiastical tribunal and debate some of his ideas rather than face Flemish smugglers and an inglorious end, tossed into the North Sea. L’œuvre au noir is a remarkable if pessimistic book that reflects on science and intolerance in a beautiful style, a book that I put on par with the equally great Memoirs of Hadrian.
The title of this book, Informative Hypotheses, somehow put me off from the start: the author, Herbert Hoijtink, seems to distinguish between informative and uninformative (deformative? disinformative?) hypotheses. Namely, a point null hypothesis like

H0: μ1 = μ2 = μ3

is “very informative” and unrealistic, the unconstrained alternative Ha is completely uninformative, while an “alternative null” imposing an ordering such as

Hi: μ1 < μ2 < μ3

is informative. (Hence the < signs on the cover. One of my book-review idiosyncrasies is to find hidden meaning behind the cover design…) The idea is thus to have the researcher give some input into the construction of the null hypothesis (as if hypothesis tests were not usually about questions that mattered…).
In fact, this distinction put me off so much that I only ended up reading chapters 1 (an introduction), 3 (an introduction [to the Bayesian processing of such hypotheses]) and 10 (on the Bayesian foundations of testing informative hypotheses). Hence the very biased review of Informative Hypotheses that follows…
Given an existing (but out of print?) reference like Robertson, Wright and Dykstra (1988), which I particularly enjoyed when working on isotonic regression in the mid-90’s, I do not see much added value in the present book. The important references are mostly centred on works by the author and his co-authors or students (often Unpublished or In Press), which gives me the impression the book was hurriedly gathered from those papers.
“The Bayes factor (…) is default, objective, based on an appropriate quantification of complexity.” (p.197)
The first chapter of Informative Hypotheses is a motivation for the study of those informative hypotheses, with a focus on ANOVA models. There is not much in the chapter that explains what is so special about those ordering (null) hypotheses and why a whole book is required to cover their processing. A noteworthy specificity of the approach, nonetheless, is that point null hypotheses seem to be replaced with “about equality constraints” (p.9), |μ2-μ3|<d, where d is a difference specified as significant by the researcher. This chapter also gives illustrations of ordered (or informative) hypotheses in the settings of analysis of covariance (ANCOVA) and regression models, but does not indicate (yet) how to run the tests. The concluding section is about the epistemological focus of the book, quoting Popper, Sober and Carnap, although I do not see much support in those quotes.
“Objective means that Bayes factors based on this prior distribution are essentially independent of this prior distribution.” (p.53)
Chapter 3 starts the introduction to Bayesian statistics with the strange idea of calling the likelihood the “density of the data”. It is indeed the probability density of the model evaluated at the data, but… it conveys a confusing meaning since it is not a density when plotted against the parameters (as in Figure 1, p.44, where, incidentally, the exact probability model is not specified). The prior distribution is defined as a normal × inverse chi-square distribution on the vector of means (in the ANOVA model) and the common variance. Because the variance is classified as a nuisance parameter, the author can get away with putting an improper prior on it (p.46). The normal prior is chosen to be “neutral”, i.e. to give the same prior weight to the null and the alternative hypotheses. This seems logical at some initial level, but constructing such a prior for convoluted hypotheses may simply be impossible… Because the null hypothesis has a positive mass (maybe .5) under the “unconstrained prior” (p.48), the author can also get away with projecting this prior onto the constrained space of the null hypothesis, even when setting the prior variance to ∞ (p.50). The Bayes factor is then the ratio of the posterior and prior normalising constants over the constrained parameter space. The book still mentions the Lindley-Bartlett paradox (p.60) in the case of the about-equality hypotheses. The appendix to this chapter mentions the issue of improper priors and the need to accommodate the infinite mass with training samples, providing a minimum-training-sample solution using mixtures that sounds fairly ad hoc to me.
“Bayes factors for the evaluation of informative hypotheses have a simple form.” (p. 193)
Chapter 10 is the final chapter of Informative Hypotheses, on the “Foundations of Bayesian evaluation of informative hypotheses”, and I was expecting a more in-depth analysis of those special hypotheses, but it is mostly a repetition of what is found in Chapter 3, the wider generality never being exploited to a useful depth. There is also the gem quoted above that, because Bayes factors are the ratio of two (normalising) constants, fm/cm, they have a “simple form”. The reference to Carlin and Chib (1995) for computing other cases then sounds pretty obscure. (Another tiny gem is that I spotted the R software contingency spelled in three different ways.) The book mentions the Savage-Dickey representation of the Bayes factor, but I could not spot the connection in the few lines (p.193) dedicated to this ratio. More generally, I do not find the generality of this chapter particularly convincing, as most of it replicates notions found in Chapter 3, like the use of posterior priors. The numerical approximation of Bayes factors is proposed via simulation from the unconstrained prior and posterior (p.207), then via a stepwise decomposition of the Bayes factor (p.208) and a Gibbs sampler that relies on inverse cdf sampling.
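The simulation scheme mentioned above (draw from the unconstrained prior and from the unconstrained posterior, then compare the proportions of draws satisfying the constraint) can be sketched in a few lines of Python. The setting below is a toy normal-means example of my own making, not one from the book: three group means with known unit variance, a diffuse unconstrained normal prior, and the ordering constraint μ1<μ2<μ3; the data summaries and prior scale are invented for illustration.

```python
import random

random.seed(1)

# Toy data: observed averages of three groups (hypothetical numbers)
data_means = [0.2, 0.5, 0.9]
n = 20        # observations per group, known unit variance
tau = 10.0    # sd of the diffuse unconstrained prior N(0, tau^2) on each mean

def posterior_draw(xbar):
    """Conjugate normal update: N(0, tau^2) prior, x-bar ~ N(mu, 1/n)."""
    var = 1.0 / (1.0 / tau**2 + n)
    return random.gauss(var * n * xbar, var**0.5)

def satisfies_order(mus):
    """The informative (ordering) constraint mu1 < mu2 < mu3."""
    return mus[0] < mus[1] < mus[2]

N = 100_000
prior_hits = sum(
    satisfies_order([random.gauss(0.0, tau) for _ in range(3)]) for _ in range(N)
)
post_hits = sum(
    satisfies_order([posterior_draw(x) for x in data_means]) for _ in range(N)
)

# Bayes factor of the constrained versus the unconstrained model:
# ratio of posterior to prior probabilities of the constrained region
bf = (post_hits / N) / (prior_hits / N)
print(prior_hits / N, post_hits / N, bf)
```

Since the three means are exchangeable under the unconstrained prior, the prior proportion converges to 1/6, and the data being compatible with the ordering pushes the posterior proportion, hence the Bayes factor, well above that benchmark.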
Overall, I feel that this book came out too early, without a proper basis and dissemination of the author’s ideas: to wit, a large number of references are connected to the author, some In Press, others Unpublished (which leads to a rather abstract “see Hoijtink (Unpublished) for a related theorem” (p.195)). From my incomplete reading, I did not gather a sense of a novel perspective, but rather of a topic that seems too narrow for a whole book.