Evaluating Parameters and Survival Function in the Exponential Distribution Model: A Contrast Between Complete and Censored Data

Haneen Raad Sahib* Hadeel Salim Al-Kutubi

Department of Mathematics, Faculty of Computer Science and Mathematics, University of Kufa, Al-Najaf 54001, Iraq

Corresponding Author Email: haneenr.aljazaeri@student.uokufa.edu.iq

Pages: 2063-2068 | DOI: https://doi.org/10.18280/mmep.100616

Received: 4 February 2023 | Revised: 10 April 2023 | Accepted: 2 May 2023 | Available online: 21 December 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

This study presents a derivation of the parameter and survival function in the exponential distribution of lifetime data, comparing results obtained from complete data and censored data. The latter includes both time censored sampling (Type I censored data) and failure censored sampling (Type II censored data). Parameter estimation and survival function were approached via two distinct methods, the maximum likelihood method and the Bayes method. Simulation outcomes indicated that the use of complete data yielded superior results in terms of mean square error (MSE) and mean percentage error (MPE) for both the model parameter and the survival function. This study provides valuable insights into the efficacy of data types and estimation methods in survival analysis within the exponential distribution model.

Keywords: 

Bayes estimation method, maximum likelihood estimation method, complete data, Type I censored data, Type II censored data

1. Introduction

This research discusses two estimation techniques: the maximum likelihood method and the Bayes method. We begin with the maximum likelihood method, deriving the parameter of the exponential distribution and the survival function of the exponential model first from complete data and then from Type I and Type II censored data.

The second step is to derive the parameter and survival function of the exponential model by the Bayes method of estimation, again using complete data as well as Type I and Type II censored data.

The last step is a comparison of the estimated model parameters and survival functions of the exponential distribution, using MSE and MPE values obtained from simulation, to determine which estimators perform best.

2. Maximum Likelihood Estimators

2.1 Complete data

Let $t_1, t_2, \ldots, t_n$ be a random sample of lifetimes from the exponential distribution, where $n$ items are subjected to test and the test is terminated after all items have failed [1-6]. Suppose the failure times are distributed with p.d.f. $f(t_i, \theta)$ given by:

$f\left(t_i, \theta\right)=\frac{1}{\theta} e^{\frac{-t_i}{\theta}} \quad t_i \in(0, \infty)$      (1)

the likelihood function is given by:

$L\left(t_i, \theta\right)=\prod_{i=1}^n \frac{1}{\theta} e^{\frac{-t_i}{\theta}}=\theta^{-n} \times e^{\frac{-\sum_{i=1}^n t_i}{\theta}}$       (2)

the log-likelihood function is given by:

$\ln L\left(t_i, \theta\right)=\ln \left[\theta^{-n} \exp\left(\frac{-\sum_{i=1}^n t_i}{\theta}\right)\right]=-n \ln \theta-\frac{\sum_{i=1}^n t_i}{\theta}$     (3)

differentiating with respect to θ gives,

$\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=\frac{-n}{\theta}+\frac{\sum_{i=1}^n t_i}{\theta^2}$      (4)

to find the estimator of the parameter $\hat{\theta}_M$, we solve:

$\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=0$, then, $\hat{\theta}_M=\frac{\sum_{i=1}^n t_i}{n}$    (5)

then the estimator of the survival function is:

$\hat{s}_M(t)=\exp \left[\frac{-t_0}{\hat{\theta}}\right]=\exp \left[\frac{-n t_0}{\sum_{i=1}^n t_i}\right]$    (6)
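The closed forms in Eqs. (5)-(6) are easy to check numerically. Below is a minimal sketch in Python (the study itself used Matlab); the variable names, seed, and numeric settings are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (not from the paper): mean parameter and sample size.
theta_true = 1.2
n = 100
t = rng.exponential(scale=theta_true, size=n)  # complete exponential sample

theta_mle = t.sum() / n                 # Eq. (5): the sample mean
t0 = 0.5                                # mission time at which survival is evaluated
s_mle = np.exp(-n * t0 / t.sum())       # Eq. (6): exp(-t0 / theta_mle)

print(theta_mle, s_mle)
```

Because Eq. (5) is just the sample mean, `theta_mle` converges to the true θ as n grows.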

2.2 Time censored sampling (Type I censored data)

Suppose $n$ items are placed on test and the experiment is terminated at a preassigned time $t_0$. The data then consist of the lifetimes of the items that failed before $t_0$, say $t_1, t_2, \ldots, t_m$; i.e., $m$ items failed before $t_0$ and $n-m$ items survived beyond $t_0$. The likelihood of $\theta$ under Type I censoring is given by references [7-11]:

$L\left(t_1, t_2, \ldots, t_m, m\right)=\frac{n !}{(n-m) !} \prod_{i=1}^m f\left(t_i, \theta\right)\left[s\left(t_0\right)\right]^{n-m}, \quad 0 \leq t_1 \leq \cdots \leq t_m<t_0$       (7)

the pdf of the exponential distribution is given by:

$f\left(t_i, \theta\right)=\frac{1}{\theta} \exp \left[\frac{-t_i}{\theta}\right]$     (8)

the survival function at $t_0$ is given by:

$S\left(t_0\right)=\exp \left(\frac{-t_0}{\theta}\right)$    (9)

$\begin{gathered}L\left(t_1, t_2, \ldots, t_m, m\right)= \\ \frac{n !}{(n-m) !} \prod_{i=1}^m\left[\frac{1}{\theta} \exp \left(\frac{-t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}, \text { then, } \\ L\left(t_1, t_2, \ldots ., t_m, m\right) \\ =\frac{n !}{(n-m) !}\left[\theta^{-m} \times \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}\end{gathered}$        (10)

$\ln L\left(t_i, \theta\right)=\ln \frac{n !}{(n-m) !}-m \ln \theta-\frac{\sum_{i=1}^m t_i}{\theta}-(n-m) \frac{t_0}{\theta}$     (11)

differentiating with respect to θ gives:

$\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=\frac{-m}{\theta}+\frac{\sum_{i=1}^m t_i}{\theta^2}+\frac{(n-m) t_0}{\theta^2}$      (12)

to find the estimator of the parameter $\widehat{\theta}_M$, we solve $\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=0$; then,

$\widehat{\theta}_M=\frac{\sum_{i=1}^m t_i+(n-m) t_0}{m}$       (13)

then the estimator of the survival function is:

$\hat{S}_M\left(t_0\right)=\exp \left(\frac{-t_0}{\hat{\theta}}\right)=\exp \left[\frac{-m t_0}{\sum_{i=1}^m t_i+(n-m) t_0}\right]$      (14)
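Under Type I censoring, Eq. (13) divides the total time on test by the random number of observed failures $m$. The following sketch is a hedged illustration (Python, not the study's Matlab; all settings are assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed settings for illustration: true mean, sample size, censoring time.
theta_true, n, t0 = 1.2, 50, 1.0
lifetimes = rng.exponential(scale=theta_true, size=n)

failures = np.sort(lifetimes[lifetimes < t0])   # the m failures observed before t0
m = failures.size
total_time = failures.sum() + (n - m) * t0      # total time on test

theta_mle = total_time / m                      # Eq. (13)
s_mle = np.exp(-t0 / theta_mle)                 # Eq. (14)
print(m, theta_mle, s_mle)
```

Note that `m` varies from replicate to replicate, unlike the fixed `r` of Type II censoring below.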

2.3 Failure censored sampling (Type II censored data)

Let $t_1, t_2, \ldots, t_r$ be the observed lifetimes, where the number of failures $r$ is fixed while $t_r$, the time at which the experiment is terminated, is a random variable. A random sample of $n$ units is placed on the life-testing experiment, and $(n-r)$ units remain unfailed. Here $t_i$ denotes the failure time of the $i$th item and $\theta$ is the parameter of the distribution:

The likelihood of $\theta$ under Type II censoring is given by:

$\begin{gathered}L\left(t_1, t_2, \ldots, t_r, \theta\right)=\frac{n !}{(n-r) !} \prod_{i=1}^r f\left(t_i, \theta\right)\left[s\left(t_r\right)\right]^{n-r} \\ 0 \leq t_1 \leq \cdots \leq t_r\end{gathered}$    (15)

the pdf of the exponential distribution is given by:

$f\left(t_i, \theta\right)=\frac{1}{\theta} \exp \left[\frac{-t_i}{\theta}\right]$     (16)

$S\left(t_r\right)=\exp \left(\frac{-t_r}{\theta}\right)$      (17)

$\begin{aligned} & L\left(t_i, \theta\right)=\frac{n !}{(n-r) !} \prod_{i=1}^r\left[\frac{1}{\theta} \exp \left(\frac{-t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r} \\ & =\frac{n !}{(n-r) !}\left[\theta^{-r} \times \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}\end{aligned}$      (18)

$\ln L\left(t_i, \theta\right)=\ln \frac{n !}{(n-r) !}-r \ln \theta-\frac{\sum_{i=1}^r t_i}{\theta}-(n-r) \frac{t_r}{\theta}$       (19)

differentiating with respect to θ gives:

$\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=\frac{-r}{\theta}+\frac{\sum_{i=1}^r t_i}{\theta^2}+\frac{(n-r) t_r}{\theta^2}$     (20)

to find the estimator of the parameter $\hat{\theta}_M$, we solve $\frac{\partial \ln L\left(t_i, \theta\right)}{\partial \theta}=0$; then:

$\hat{\theta}_M=\frac{\sum_{i=1}^r t_i+(n-r) t_r}{r}$       (21)

also, the estimator of the survival function is:

$\hat{S}_M\left(t_0\right)=\exp \left(\frac{-t_0}{\hat{\theta}}\right)=\exp \left[\frac{-r t_0}{\sum_{i=1}^r t_i+(n-r) t_r}\right]$   (22)
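Under Type II censoring the roles reverse: $r$ is fixed and the stopping time $t_r$ is random. A sketch of Eqs. (21)-(22), again with illustrative, assumed settings (Python rather than the Matlab used in the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed settings: true mean, sample size, fixed number of failures.
theta_true, n, r = 1.2, 50, 20
order = np.sort(rng.exponential(scale=theta_true, size=n))  # order statistics

t_r = order[r - 1]                        # random termination time (r-th failure)
total_time = order[:r].sum() + (n - r) * t_r  # total time on test
theta_mle = total_time / r                # Eq. (21)

t0 = 0.5                                  # mission time for the survival estimate
s_mle = np.exp(-t0 / theta_mle)           # Eq. (22)
print(theta_mle, s_mle)
```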

3. Bayes Estimator

3.1 Complete data

Let $t_1, t_2, \ldots, t_n$ be the lifetimes of a random sample of size $n$ with probability density function $f(t_i, \theta)$. Consider the one-parameter exponential lifetime distribution [1-5, 12-14]. We first find the Fisher information from the probability function.

$f\left(t_i, \theta\right)=\frac{1}{\theta} \exp \left[\frac{-t_i}{\theta}\right]$       (23)

$\begin{gathered}\ln f\left(t_i, \theta\right)=\ln \left[\frac{1}{\theta} e^{\frac{-t_i}{\theta}}\right]=\ln \frac{1}{\theta}+\ln e^{\frac{-t_i}{\theta}} \\ =-\ln \theta-\frac{t_i}{\theta}\end{gathered}$    (24)

$\frac{\partial \ln f\left(t_i, \theta\right)}{\partial \theta}=\frac{-1}{\theta}+\frac{t_i}{\theta^2}$  (25)

$\frac{\partial^2 \ln f\left(t_i, \theta\right)}{\partial \theta^2}=\frac{1}{\theta^2}-\frac{2 t_i}{\theta^3}$  (26) 

$E\left(\frac{\partial^2 \ln f\left(t_i, \theta\right)}{\partial \theta^2}\right)=E\left(\frac{1}{\theta^2}\right)-E\left(\frac{2 t_i}{\theta^3}\right)=\frac{1}{\theta^2}-\frac{2}{\theta^2}=\frac{-1}{\theta^2}$       (27)

$I(\theta)=-n E\left(\frac{\partial^2 \ln f\left(t_i, \theta\right)}{\partial \theta^2}\right)=-n \times \frac{-1}{\theta^2}=\frac{n}{\theta^2}$   (28)

We find the Jeffreys prior by taking $g(\theta) \propto \sqrt{I(\theta)}$; then the Jeffreys prior information is:

$g(\theta)=k \frac{\sqrt{n}}{\theta}$       (29)

where $k$ is a constant. The joint probability density function of $\left(t_1, t_2, \ldots, t_n, \theta\right)$ is given by $H\left(t_1, t_2, \ldots, t_n, \theta\right)=\prod_{i=1}^n f\left(t_i, \theta\right) g(\theta)$:

$\begin{aligned} H\left(t_1, t_2, \ldots ., t_n, \theta\right) & =\frac{1}{\theta^n} \exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right) k \frac{\sqrt{n}}{\theta} \\ & =\frac{k \sqrt{n}}{\theta^{n+1}} \exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right)\end{aligned}$           (30)

The marginal probability density function of the data $\left(t_1, t_2, \ldots, t_n\right)$ is:

$\begin{aligned} & P\left(t_1, t_2, \ldots ., t_n\right)=\int H\left(t_1, t_2, \ldots ., t_n, \theta\right) d \theta=  \int_0^{\infty} \frac{k \sqrt{n}}{\theta^{n+1}} \exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right) d \theta=\frac{k \sqrt{n}(n-1) !}{\left(\sum_{i=1}^n t_i\right)^n}\end{aligned}$  (31)

The posterior distribution of $\theta$ given the data $\left(t_1, t_2, \ldots, t_n\right)$ is given by,

$\begin{array}{r}\prod\left(\theta \mid t_1, t_2, \ldots, t_n\right)=\frac{\mathrm{H}\left(t_1, t_2, \ldots ., t_n, \theta\right)}{\mathrm{P}\left(t_1, t_2, \ldots ., t_n\right)} \\ =\frac{\exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right)}{\theta^{n+1}} \cdot \frac{\left(\sum_{i=1}^n t_i\right)^n}{(n-1) !}\end{array}$      (32)

By using the squared error loss function $\ell(\hat{\theta}-\theta)=c(\hat{\theta}-\theta)^2$, we can obtain the risk function, such that:

$R(\hat{\theta}, \theta)=\int_0^{\infty} \ell(\hat{\theta}-\theta) \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) \mathrm{d} \theta=\int_0^{\infty}\left(c \hat{\theta}^2-2 c \hat{\theta} \theta+c \theta^2\right) \frac{\exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right)\left(\sum_{i=1}^n t_i\right)^n}{\theta^{n+1}(n-1) !} \mathrm{d} \theta$    (33)

$\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=2 c \hat{\theta}-2 c \frac{\left(\sum_{i=1}^n t_i\right)^n}{(n-1) !} \int_0^{\infty} \theta^{-n} \exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right) \mathrm{d} \theta$       (34)

Setting $\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=0$, then,

$\hat{\theta}_B=\frac{\left(\sum_{i=1}^n t_i\right)^n}{(n-1) !} \int_0^{\infty}\left(\frac{\sum_{i=1}^n t_i}{y}\right)^{-n} \exp (-y)\left(\frac{\sum_{i=1}^n t_i}{y^2}\right) \mathrm{d} y=\frac{\sum_{i=1}^n t_i}{n-1}$          (35)

$\begin{aligned} \hat{S}_B(\mathrm{t}) & =\int_0^{\infty} \exp \left(\frac{-t_i}{\theta}\right) \prod\left(\theta \mid t_1, t_2, \ldots ., t_n\right) \mathrm{d} \theta \\ & =\int_0^{\infty} \exp \left(\frac{-t_i}{\theta}\right) \frac{\exp \left(\frac{-\sum_{i=1}^n t_i}{\theta}\right)}{\theta^{n+1}} \cdot \frac{\left(\sum_{i=1}^n t_i\right)^n}{(n-1) !} \mathrm{d} \theta \\ & =\frac{\left(t_i+\sum_{i=1}^n t_i\right)^{-n}}{\left(\sum_{i=1}^n t_i\right)^{-n}}\end{aligned}$         (36)
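For complete data, the Bayes estimators in Eqs. (35)-(36) differ from the maximum likelihood versions only through the divisor $n-1$ and the geometric form of the survival estimate. A sketch under assumed settings (Python; names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed settings: true mean and sample size for the illustration.
theta_true, n = 1.2, 100
t = rng.exponential(scale=theta_true, size=n)
T = t.sum()                              # sufficient statistic: total time on test

theta_bayes = T / (n - 1)                # Eq. (35): Bayes estimator under Jeffreys prior
t0 = 0.5                                 # mission time
s_bayes = (T / (t0 + T)) ** n            # Eq. (36): posterior mean of exp(-t0/theta)
print(theta_bayes, s_bayes)
```

Since $T/(n-1) > T/n$, the Bayes point estimate always sits slightly above the ML estimate for the same sample.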

3.2 Time censored sampling (Type I censored data)

Let $t_1, t_2, \ldots, t_n$ be the lifetimes of a random sample of size $n$ with probability function $f(t_i, \theta)$ [10, 11, 15]. Consider the one-parameter exponential lifetime distribution.

$f\left(t_i, \theta\right)=\frac{1}{\theta} \exp \left[\frac{-t_i}{\theta}\right]$    (37)

We find the Jeffreys prior by taking $g(\theta) \propto \sqrt{I(\theta)}$, where,

$\mathrm{I}(\theta)=-\mathrm{nE}\left(\frac{\partial^2 \ln \mathrm{f}\left(\mathrm{t}_{\mathrm{i}}, \theta\right)}{\partial \theta^2}\right)=\frac{\mathrm{n}}{\theta^2}$    (38)

then, $g(\theta)=\mathrm{k} \frac{\sqrt{n}}{\theta}$     (39)

where $k$ is a constant. The likelihood function is given by:

$L\left(t_1, t_2, \ldots, t_n, \theta\right)=\frac{n !}{(n-m) !} \prod_{i=1}^m f\left(t_i, \theta\right)\left[s\left(t_0\right)\right]^{n-m}=\frac{n !}{(n-m) !}\left[\theta^{-m} \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}$        (40)

$\begin{aligned} H\left(t_1, t_2, \ldots, t_n, \theta\right) & =L\left(t_1, t_2, \ldots, t_n, \theta\right) g(\theta) \\ & =\frac{n !}{(n-m) !}\left[\theta^{-m} \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m} \cdot k \frac{\sqrt{n}}{\theta} \\ & =\frac{k \sqrt{n} \cdot n !}{(n-m) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1}}\end{aligned}$         (41)

The marginal probability density function of the data $(t_1, t_2, \ldots, t_n)$ is:

$\begin{aligned} P\left(t_1, t_2, \ldots, t_n\right) & =\int H\left(t_1, t_2, \ldots, t_n, \theta\right) \mathrm{d} \theta=\int_0^{\infty} \frac{k \sqrt{n} \cdot n !}{(n-m) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1}} \mathrm{~d} \theta \\ & =\frac{k \sqrt{n} \cdot n !}{(n-m) !} \int_0^{\infty} \exp \left[-\frac{\sum_{i=1}^m t_i+t_0(n-m)}{\theta}\right] \theta^{-(m+1)} \mathrm{d} \theta \\ & =\frac{k \sqrt{n} \cdot n !}{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m(n-m) !} \int_0^{\infty} \exp (-y) \cdot y^{m-1} \mathrm{~d} y=\frac{k \sqrt{n} \cdot n ! \cdot(m-1) !}{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m(n-m) !}\end{aligned}$        (42)

The posterior probability density function of $\theta$ given the data $(t_1, t_2, \ldots, t_n)$ is given by,

$\begin{aligned} & \mathrm{\prod}\left(\theta \mid t_1, t_2, \ldots ., t_n\right)=\frac{\mathrm{H}\left(t_1, t_2, \ldots, t_n, \theta\right)}{\mathrm{P}\left(t_1, t_2, \ldots, t_n\right)} =\frac{\frac{k \sqrt{n} \cdot n !}{(n-m) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1}}}{\frac{k \sqrt{n} \cdot n ! \cdot(m-1) !}{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m \cdot(n-m) !}} \\ & =  \frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m \cdot \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1} \cdot(m-1) !} \\ & \end{aligned}$          (43)

By using the squared error loss function $\ell(\hat{\theta}-\theta)=c(\hat{\theta}-\theta)^2$, we can obtain the risk function, such that:

$\begin{aligned} R(\hat{\theta}, \theta) & =\int_0^{\infty} \ell(\hat{\theta}-\theta) \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) \mathrm{d} \theta \\ & =\int_0^{\infty}\left(c \hat{\theta}^2-2 c \hat{\theta} \theta+c \theta^2\right) \frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{(m-1) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1}} \mathrm{~d} \theta\end{aligned}$           (44)

$\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=2 c \hat{\theta}-2 c \frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{(m-1) !} \int_0^{\infty} \theta^{-m} \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m} \mathrm{~d} \theta$   (45)

Setting $\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=0$, then,

$\begin{aligned} \hat{\theta}_B & =\frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{(m-1) !} \int_0^{\infty} \theta^{-m} \exp \left[-\frac{\sum_{i=1}^m t_i+t_0(n-m)}{\theta}\right] \mathrm{d} \theta \\ & =\frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{(m-1) !} \int_0^{\infty}\left(\frac{\sum_{i=1}^m t_i+t_0(n-m)}{y}\right)^{-m} \exp (-y)\left(\frac{\sum_{i=1}^m t_i+t_0(n-m)}{y^2}\right) \mathrm{d} y \\ & =\frac{\sum_{i=1}^m t_i+t_0(n-m)}{m-1}\end{aligned}$     (46)

$\begin{aligned} & \hat{S}_B(\mathrm{t})=\int_0^{\infty} \exp \left(\frac{-t_i}{\theta}\right) \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) \mathrm{d} \theta= \\ & \int_0^{\infty} \exp \left(\frac{-\mathrm{t}_{\mathrm{i}}}{\theta}\right) \frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m \cdot \exp \left(\frac{-\sum_{i=1}^m t_i}{\theta}\right)\left[\exp \left(\frac{-t_0}{\theta}\right)\right]^{n-m}}{\theta^{m+1} \cdot(m-1) !} \mathrm{d} \theta \\ & = \\ & \frac{\left(\sum_{\mathrm{i}=1}^{\mathrm{m}} \mathrm{t}_{\mathrm{i}}+\mathrm{t}_0(\mathrm{n}-\mathrm{m})\right)^{\mathrm{m}}}{(\mathrm{m}-1) !} \int_0^{\infty} \theta^{-(\mathrm{m}+1)} \exp \left[\frac{-\left(\mathrm{t}_{\mathrm{i}}+\sum_{\mathrm{i}=1}^{\mathrm{m}} \mathrm{t}_{\mathrm{i}}+\mathrm{t}_0(\mathrm{n}-\mathrm{m})\right)}{\theta}\right] \mathrm{d} \theta \\ & \hat{S}_B(\mathrm{t})=\frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{(m-1) !} \\ & \quad \int_0^{\infty}\left(\frac{\sum_{i=1}^m t_i+t_i+t_0(n-m)}{y}\right)^{-(m+1)} \\ & \quad \exp (-\mathrm{y})\left(\frac{\sum_{i=1}^m t_i+t_i+t_0(n-m)}{y^2}\right) \mathrm{dy} \\ & =\frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{\left[\sum_{i=1}^m t_i+t_i+t_0(n-m)\right]^m(m-1) !} \int_0^{\infty} \exp (-y) \cdot y^{m-1} \mathrm{dy} \\ & =\frac{\left(\sum_{i=1}^m t_i+t_0(n-m)\right)^m}{\left[\sum_{i=1}^m t_i+t_i+t_0(n-m)\right]^m}\end{aligned}$        (47)
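Eqs. (46)-(47) have the same structure as the complete-data case, with the Type I total time on test $W=\sum_{i=1}^m t_i+(n-m) t_0$ replacing $\sum_{i=1}^n t_i$ and $m$ replacing $n$. A sketch with assumed, illustrative numbers (Python, not the study's Matlab):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed settings: true mean, sample size, censoring time.
theta_true, n, t0 = 1.2, 50, 1.0
lifetimes = rng.exponential(scale=theta_true, size=n)

m = int((lifetimes < t0).sum())                  # number of observed failures
W = lifetimes[lifetimes < t0].sum() + (n - m) * t0  # Type I total time on test

theta_bayes = W / (m - 1)                        # Eq. (46)
t = 0.5                                          # mission time
s_bayes = (W / (t + W)) ** m                     # Eq. (47)
print(m, theta_bayes, s_bayes)
```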

3.3 Failure censored sampling (Type II censored data)

Let $t_1, t_2, \ldots, t_n$ be the lifetimes of a random sample of size $n$ with probability function $f(t_i, \theta)$. Consider the one-parameter exponential lifetime distribution.

$f\left(t_i, \theta\right)=\frac{1}{\theta} \exp \left[\frac{-t_i}{\theta}\right]$,       (48)

and the Jeffreys prior is:

$g(\theta)=k \frac{\sqrt{n}}{\theta}$   (49)

where $k$ is a constant. The likelihood function is given by:

$\begin{aligned} & L\left(t_1, t_2, \ldots, t_n, \theta\right)=\frac{n !}{(n-r) !} \prod_{i=1}^r f\left(t_i, \theta\right)\left[s\left(t_r\right)\right]^{n-r} \\ & =\frac{n !}{(n-r) !}\left[\theta^{-r} \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\right]\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}\end{aligned}$        (50)

$\begin{aligned} & H\left(t_1, t_2, \ldots ., t_n, \theta\right)=\prod_{i=1}^n f\left(t_i, \theta\right) g(\theta) \\ & =L\left(t_1, t_2, \ldots, t_n, \theta\right) g(\theta) \\ & =\frac{n !}{(n-r) !}\left[\theta^{-r} \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\right] \cdot\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r} \cdot k \frac{\sqrt{n}}{\theta} \\ & =\frac{k \sqrt{n} \cdot n !}{(n-r) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1}}\end{aligned}$          (51)

the marginal probability density function of the data $\left(t_1, t_2, \ldots, t_n\right)$ is:

$\begin{aligned} & P\left(t_1, t_2, \ldots, t_n\right)=\int H\left(t_1, t_2, \ldots, t_n, \theta\right) d \theta \\ & =\int_0^{\infty} \frac{k \sqrt{n} \cdot n !}{(n-r) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1}} d \theta \\ & =\frac{k \sqrt{n} \cdot n !}{(n-r) !} \int_0^{\infty} \frac{\exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1}} d \theta \\ & =\frac{k \sqrt{n} \cdot n !}{(n-r) !} \int_0^{\infty} \exp \left[-\left(\frac{\sum_{i=1}^r t_i+t_r(n-r)}{\theta}\right)\right] \theta^{-(r+1)} d \theta \\ & =\frac{k \sqrt{n} \cdot n !}{(n-r) !} \int_0^{\infty} \exp (-y)\left(\frac{\sum_{i=1}^r t_i+t_r(n-r)}{y}\right)^{-(r+1)} \\ & \left(\frac{\sum_{i=1}^r t_i+t_r(n-r)}{y^2}\right) d y=\frac{k \sqrt{n} \cdot n ! \cdot(r-1) !}{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r \cdot(n-r) !}\end{aligned}$        (52)

The posterior distribution of $\theta$ given $(t_1, t_2, \ldots, t_n)$ is given by:

$\begin{aligned} & \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) =\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r \cdot \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1} \cdot(r-1) !} \\ & \end{aligned}$      (53)

By using the squared error loss function $\ell(\hat{\theta}-\theta)=c(\hat{\theta}-\theta)^2$, we can obtain the risk function, such that:

$\begin{aligned} R(\hat{\theta}, \theta) & =\int_0^{\infty} \ell(\hat{\theta}-\theta) \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) \mathrm{d} \theta \\ & =\int_0^{\infty}\left(c \hat{\theta}^2-2 c \hat{\theta} \theta+c \theta^2\right) \frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{(r-1) !} \cdot \frac{\exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1}} \mathrm{~d} \theta\end{aligned}$          (54)

$\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=2 c \hat{\theta}-2 c \frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{(r-1) !} \int_0^{\infty} \theta^{-r} \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r} \mathrm{~d} \theta$       (55)

Setting $\frac{\partial R(\hat{\theta}, \theta)}{\partial \hat{\theta}}=0$,

$\begin{aligned} \hat{\theta}_B & =\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{(r-1) !} \int_0^{\infty}\left(\frac{\sum_{i=1}^r t_i+t_r(n-r)}{y}\right)^{-r} \exp (-y)\left(\frac{\sum_{i=1}^r t_i+t_r(n-r)}{y^2}\right) \mathrm{d} y \\ & =\frac{\sum_{i=1}^r t_i+t_r(n-r)}{r-1}\end{aligned}$         (56)

$\begin{gathered}\hat{S}_B(t)=\int_0^{\infty} \exp \left(\frac{-t_i}{\theta}\right) \prod\left(\theta \mid t_1, t_2, \ldots, t_n\right) \mathrm{d} \theta \\ =\int_0^{\infty} \exp \left(\frac{-t_i}{\theta}\right) \frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r \exp \left(\frac{-\sum_{i=1}^r t_i}{\theta}\right)\left[\exp \left(\frac{-t_r}{\theta}\right)\right]^{n-r}}{\theta^{r+1}(r-1) !} \mathrm{d} \theta \\ =\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{(r-1) !} \int_0^{\infty} \theta^{-(r+1)} \exp \left[\frac{-\left(t_i+\sum_{i=1}^r t_i+t_r(n-r)\right)}{\theta}\right] \mathrm{d} \theta\end{gathered}$       (57a)

$\begin{aligned} \hat{S}_B(t) & =\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{(r-1) !} \int_0^{\infty}\left(\frac{t_i+\sum_{i=1}^r t_i+t_r(n-r)}{y}\right)^{-(r+1)} \exp (-y)\left(\frac{t_i+\sum_{i=1}^r t_i+t_r(n-r)}{y^2}\right) \mathrm{d} y \\ & =\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{\left[t_i+\sum_{i=1}^r t_i+t_r(n-r)\right]^r(r-1) !} \int_0^{\infty} \exp (-y) \cdot y^{r-1} \mathrm{~d} y=\frac{\left(\sum_{i=1}^r t_i+t_r(n-r)\right)^r}{\left[t_i+\sum_{i=1}^r t_i+t_r(n-r)\right]^r}\end{aligned}$       (57b)
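The Type II Bayes estimators of Eqs. (56)-(57b) follow the same pattern, with $r$ failures and the random termination time $t_r$. A sketch under assumed settings (Python; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed settings: true mean, sample size, fixed number of failures.
theta_true, n, r = 1.2, 50, 20
order = np.sort(rng.exponential(scale=theta_true, size=n))

t_r = order[r - 1]                      # time of the r-th failure
W = order[:r].sum() + (n - r) * t_r     # Type II total time on test

theta_bayes = W / (r - 1)               # Eq. (56)
t = 0.5                                 # mission time
s_bayes = (W / (t + W)) ** r            # Eq. (57b)
print(theta_bayes, s_bayes)
```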

4. The Model and the Results

For this simulation experiment, we chose sample sizes n = 25, 50, 75, 100, several values of the parameter θ = 0.4, 0.8, 1.2, 1.6, m = 20 (the number of items failed before t0), and a censoring time t0 = 10. The number of replicates was R = 1000. The simulation software was written in Matlab. After the parameter was estimated, the approaches were compared using the mean square error (MSE) and mean percentage error (MPE):

$MSE(\hat{\theta})=\frac{\sum_{i=1}^{R}\left(\hat{\theta}_i-\theta\right)^2}{R}$    (58)

and the MPE is:

$MPE(\hat{\theta})=\frac{\sum_{i=1}^{R} \frac{\left|\hat{\theta}_i-\theta\right|}{\theta}}{R}$      (59)
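The simulation loop behind Tables 1-4 can be sketched as follows for one configuration (complete data only, Python instead of the study's Matlab; extending it to the censored schemes and survival functions follows the same pattern, and the particular seed and values are assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(6)

# One configuration of the experiment: theta = 0.8, n = 25, R = 1000 replicates.
theta, n, R = 0.8, 25, 1000
mle = np.empty(R)
bayes = np.empty(R)
for i in range(R):
    t = rng.exponential(scale=theta, size=n)
    mle[i] = t.sum() / n          # Eq. (5): ML estimator, complete data
    bayes[i] = t.sum() / (n - 1)  # Eq. (35): Bayes estimator, complete data

mse_mle = np.mean((mle - theta) ** 2)            # Eq. (58)
mpe_mle = np.mean(np.abs(mle - theta) / theta)   # Eq. (59)
mse_bayes = np.mean((bayes - theta) ** 2)
mpe_bayes = np.mean(np.abs(bayes - theta) / theta)
print(mse_mle, mpe_mle, mse_bayes, mpe_bayes)
```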

Table 1. MSE for parameters with m=20 and t0=10

n      b      $\widehat{\theta_1}$    $\widehat{\theta_2}$    $\widehat{\theta_3}$    $\widehat{\theta_4}$
25     0.4    3.7127e-06    2.1906e-06    0.1119    0.1147
25     0.8    4.8835e-08    1.6494e-06    0.4785    0.4843
25     1.2    3.9381e-05    6.5903e-05    1.3149    1.3244
25     1.6    1.2222e-05    3.3061e-05    1.9065    1.9180
50     0.4    1.1353e-06    1.8101e-06    0.0070    0.0074
50     0.8    3.3962e-06    4.2621e-06    0.0068    0.0072
50     1.2    1.0596e-05    6.4878e-06    0.0071    0.0077
50     1.6    2.9898e-05    4.0372e-05    0.0021    0.0022
100    0.4    7.7884e-07    1.0388e-06    0.2455    0.2586
100    0.8    1.1824e-05    1.3905e-05    0.2713    0.2861
100    1.2    9.0851e-06    7.0824e-06    0.2743    0.2896
100    1.6    7.3198e-05    6.6112e-05    0.2797    0.2956

Table 2. MPE for parameters with m=20 and t0=10

n      b      $\widehat{\theta_1}$    $\widehat{\theta_2}$    $\widehat{\theta_3}$    $\widehat{\theta_4}$
25     0.4    1.5233e-04    1.1701e-04    0.0264    0.0268
25     0.8    8.7353e-06    5.0766e-05    0.0273    0.0275
25     1.2    1.6537e-04    2.1393e-04    0.0302    0.0303
25     1.6    6.9096e-05    1.3624e-04    0.0273    0.0274
50     0.4    8.4236e-05    1.0636e-04    0.0066    0.0068
50     0.8    9.9922e-05    8.1602e-05    0.0033    0.0034
50     1.2    8.5780e-05    6.7123e-05    0.0022    0.0023
50     1.6    1.4409e-04    1.6744e-04    0.0012    0.0012
100    0.4    6.9769e-05    8.0575e-05    0.0392    0.0402
100    0.8    1.3592e-04    1.4740e-04    0.0206    0.0211
100    1.2    7.9430e-05    7.0131e-05    0.0138    0.0142
100    1.6    1.6909e-04    1.6070e-04    0.0105    0.0107

Table 3. MSE for exponential survival function with m=20 and t0=10

n      b      $\widehat{\theta_1}$    $\widehat{\theta_2}$    $\widehat{\theta_3}$    $\widehat{\theta_4}$
25     0.4    0.0032      0.0032      0.2456    0.3436
25     0.8    0.0033      0.0033      0.2103    0.3538
25     1.2    0.0032      0.0033      0.1944    0.3410
25     1.6    0.0030      0.0030      0.1736    0.3490
50     0.4    0.0015      0.0015      0.2540    0.03504
50     0.8    0.0015      0.0015      0.2068    0.3452
50     1.2    0.0016      0.0015      0.1812    0.3404
50     1.6    0.0015      0.0015      0.1696    0.3245
100    0.4    0.000076    0.000075    0.1963    0.3304
100    0.8    0.000074    0.000073    0.1770    0.3389
100    1.2    0.000078    0.000077    0.1626    0.3292
100    1.6    0.000072    0.000071    0.1531    0.3402

Table 4. MPE for exponential survival function with m=20 and t0=10

n      b      $\widehat{\theta_1}$    $\widehat{\theta_2}$    $\widehat{\theta_3}$    $\widehat{\theta_4}$
25     0.4    0.1907    0.2168    8.2428    9.5335
25     0.8    0.1927    0.2220    6.1735    8.3127
25     1.2    0.1802    0.2032    5.1066    6.9212
25     1.6    0.1755    0.1980    7.5035    9.7509
50     0.4    0.1200    0.1283    4.9120    5.6026
50     0.8    0.1244    0.1342    6.0458    8.5352
50     1.2    0.1214    0.1297    7.5696    11.8876
50     1.6    0.1201    0.1282    8.1162    11.0632
100    0.4    0.0834    0.0863    5.3700    6.9112
100    0.8    0.0839    0.0873    6.1098    11.1770
100    1.2    0.0836    0.0865    7.7303    8.4347
100    1.6    0.0807    0.0836    9.9799    9.7805

Tables 1, 2, 3, and 4 summarize the study's simulation findings. We note that the smallest values of MSE and MPE occur for the Bayesian estimator and the maximum likelihood estimator when complete data are used. When censored data are used, the MSE and MPE values are somewhat large compared with those of the other estimators; this determines the ordering of the estimators with respect to MSE and MPE.

5. Discussion and Conclusions

In Table 1, comparing the MSE of the parameter estimators with m=20 and t0=10, we found that at sample sizes of 25, 50, and 100 the maximum likelihood estimator was the best when compared with the Bayes estimator, because its MSE was lower for both complete data and censored data.

In Table 2, comparing the MPE of the parameter estimators with m=20 and t0=10, we found that at sample sizes of 25, 50, and 100 the Bayes estimator was the best, with the lower MPE, in the case of complete data; for censored data, however, the maximum likelihood estimator was the best, with the lower MPE.

In Table 3, comparing the MSE of the exponential survival function with m=20 and t0=10, we found that at a sample size of 25 the maximum likelihood estimator was preferable, with the lower MSE for both complete and censored data. At sample sizes of 50 and 100, the Bayes estimator was the best for complete data, having the lower MSE, while for censored data the maximum likelihood estimator remained preferable since its MSE was lower.

In Table 4, comparing the MPE of the exponential survival function with m=20 and t0=10, we found that at sample sizes of 25, 50, and 100 the maximum likelihood estimator was preferable, with the lower MPE for both complete and censored data.

Overall, based on the MSE and MPE values, the maximum likelihood estimator is preferable to the Bayes estimator for both complete data and censored data.

Acknowledgment

I am eternally grateful to my supervisor, Dr. Hadeel, who made this project a reality and whose instructions and assistance I followed throughout the whole process of creating this paper.

References

[1] Al-Kutubi, H.S. (2005). On comparison estimation procedures for parameter and survival function exponential distribution using simulation. Ph.D. Thesis, College of Ibn Al-Hatham, Baghdad University, Iraq.

[2] Al-Kutubi, H.S. (2005). On comparison estimation procedures for parameter and survival function. Iraqi Journal of Statistical Science, 9: 1-14.

[3] Al-Kutubi, H.S., Ibrahim, N.A. (2009). Bayes estimator for exponential distribution with extension of Jeffery prior information. Malaysian Journal of Mathematical Sciences, 3(2): 297-313.

[4] Ahmed, A.O.M., Al-Kutubi, H.S., Ibrahim, N.A. (2010). Comparison of the Bayesian and maximum likelihood estimation for Weibull distribution. Journal of Mathematics and Statistics, 6(2): 100-104.

[5] Epstein, B., Sobel, M. (1954). Some theorems relevant to life testing from an exponential distribution. Annals of Mathematical Statistics, 25(3): 373-381. https://doi.org/10.1214/aoms/1177728793

[6] Flygare, M.E., Austin, J.A., Buckwalter, R.M. (1985). Maximum likelihood estimation for the 2-parameter Weibull distribution based on interval data. IEEE Transactions on Reliability, R-34(1): 57-59. https://doi.org/10.1109/TR.1985.5221930

[7] Elfessi, A., Reineke, D.M. (2001). Bayesian look at classical estimation: the exponential distribution. Journal of Statistics Education, 9(1). https://doi.org/10.1080/10691898.2001.11910648

[8] Elviana, E., Purwadi, J. (2020). Parameters estimation of Rayleigh distribution in survival analysis on type II censored data using the Bayesian method. Journal of Physics: Conference Series, 1503(1): 012004. https://doi.org/10.1088/1742-6596/1503/1/012004

[9] Kumar, A. (2003). Bayes estimator for one parameter exponential distribution under multiply type II censoring scheme. International Conference on Statistics, Combinatorics and Related Area, University of Southern Maine, Portland, ME, USA.

[10] Pradhan, B., Kundu, D. (2014). Analysis of interval-censored data with Weibull lifetime distribution. Sankhya B, 76(1): 120-139. https://doi.org/10.1007/s13571-013-0076-1

[11] Wang, J.T. (2011). Estimation of lifetime distribution with missing censoring. Journal of Data Science, 9(3): 331-343.

[12] Mohammed, M.E., Al-Tebawy, A.A.A. (2021). Bayesian estimation of the beta distribution parameter (α) when the parameter (β) is known. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(14): 4879-4886.

[13] Naji, L.F., Rasheed, H.A. (2019). Bayesian estimation for two parameters of Gamma distribution under precautionary loss function. Ibn AL-Haitham Journal for Pure and Applied Sciences, 32(1): 187-196. https://doi.org/10.30526/32.1.1914

[14] Singh, U., Singh, S.K., Yadav, A.S. (2015). Bayesian estimation for exponentiated gamma distribution under progressive type-II censoring using different approximation techniques. Journal of Data Science, 13(3): 551-567. https://doi.org/10.6339/JDS.201507_13(3).0008

[15] Guure, C.B., Ibrahim, N.A. (2012). Bayesian analysis of the survival function and failure rate of Weibull distribution with censored data. Mathematical Problems in Engineering, 2012: 329489. https://doi.org/10.1155/2012/329489