generating from a failure rate function [X’ed]

While I now try to abstain from participating in the Cross Validated forum, as it proves too time-consuming an activity with little added value (in the sense that answers are much too often treated as disposable napkins by users who cannot be bothered to open a textbook and who rarely show any long-term impact of the provided answer, while clogging the forum with so many questions that individual entries get very little traffic, when compared say with the stackoverflow forum, to the point of making the analogy with disposable wipes even more appropriate!), I came across a truly interesting question the other night. Truly interesting for me in that I had never considered the issue before.

The question essentially asks how to simulate from a distribution defined by its failure rate function, which is connected to the density f of the distribution by

\eta(t)=\frac{f(t)}{\int_t^\infty f(x)\,\text{d}x}=-\frac{\text{d}}{\text{d}t}\,\log \int_t^\infty f(x)\,\text{d}x

From a purely probabilistic perspective, defining the distribution through f or through η is equivalent, as shown by the relation

F(t)=1-\exp\left\{-\int_0^t\eta(x)\,\text{d}x\right\}
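For instance, a constant failure rate η(t)=λ corresponds to the exponential distribution with rate λ, since the relation reduces to

F(t)=1-\exp\left\{-\int_0^t\lambda\,\text{d}x\right\}=1-e^{-\lambda t}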

From a simulation point of view, however, it may provide a different entry point. Indeed, all that is needed is the ability to solve (in X) the equation

\int_0^X\eta(x)\,\text{d}x=-\log(U)

when U is a Uniform (0,1) variable. This may help in that it does not require deriving f. Obviously, it also raises the question of why a distribution would be defined by its failure rate function in the first place.
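As a minimal numerical sketch of this inversion (not taken from the original Cross Validated answer), the equation can be solved by root-finding on the cumulative failure rate; the Weibull-type rate η(t)=2t used at the end is only an arbitrary illustration:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def simulate_from_failure_rate(eta, n, upper=1e3, rng=None):
    # Draw n samples from the distribution with failure rate eta by
    # solving int_0^X eta(x) dx = -log(U) for X, with U uniform on (0,1).
    # upper is an assumed search bound, taken large enough that the
    # cumulative failure rate at upper exceeds -log(U).
    rng = np.random.default_rng() if rng is None else rng
    draws = np.empty(n)
    for i, u in enumerate(rng.uniform(size=n)):
        target = -np.log(u)
        # cumulative failure rate minus the target; its root in X is the draw
        g = lambda x: quad(eta, 0, x)[0] - target
        draws[i] = brentq(g, 0.0, upper)
    return draws

# arbitrary example: Weibull failure rate eta(t) = 2t (shape 2, scale 1)
samples = simulate_from_failure_rate(lambda t: 2.0 * t, n=1000)

The numerical integration can of course be replaced by a closed-form cumulative failure rate whenever one is available.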

5 Responses to “generating from a failure rate function [X’ed]”

  1. I am sorry for the unattended thread, since your answer is complete (I thought it would take a while before reaching the ultimate solution…). Now for the question of why a r.v. is defined by its failure rate: coming from my field of work (reliability), a random event is indeed often specified through its failure rate. Imagine an event that has failure rate $\lambda_1$ before $t_0$, with the failure rate increasing if it survives beyond $t_0$. Hence the question, when one needs samples to estimate the model's parameters.

    • Thank you, Tuan. I presume it is a matter of culture: for me, defining a random variable by its failure rate carries no intuition. An additional question: why do you need simulated samples to estimate the model parameters?

      • Yes, it is indeed a matter of culture: in reliability, the failure rate indicates how likely the unit is to fail at a given time t, given that it has survived until then. If a unit has an increasing failure rate, then it is more likely to fail at a large calendar time t, given there was no failure before (and vice versa). The phrase “estimate the model parameters” was hasty, my fault. Imagine you need to build an optimal maintenance policy for the given unit (with known failure rate). I then need failure data (samples of the r.v. in my question) to calibrate the policy parameters and to compare with other policies in order to determine which is best. Hence “model parameters” should mean the policy parameters, not the probability distribution parameters!

  2. Bob Alvarez Says:

    Interesting topic but I find the white text with black background very hard to read. The equations in particular are almost unintelligible.

    • Sorry about this, WordPress is not a great platform for posting equations… and it is a real problem when considering how easy it is to write such entries on Stack Exchange! For the current post, the same equations are available in a more readable format in my Cross Validated answer.
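To make the setting of the first comment concrete (assuming, purely for illustration, a constant rate $\lambda_2>\lambda_1$ after $t_0$, whereas the comment only states that the rate increases), the simulation equation above admits a closed-form solution. The cumulative failure rate is

\int_0^t\eta(x)\,\text{d}x=\begin{cases}\lambda_1 t & t\le t_0\\ \lambda_1 t_0+\lambda_2(t-t_0) & t>t_0\end{cases}

so that

X=\begin{cases}-\log(U)/\lambda_1 & \text{if }-\log(U)\le\lambda_1 t_0\\ t_0+\left\{-\log(U)-\lambda_1 t_0\right\}/\lambda_2 & \text{otherwise}\end{cases}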
