Reducing Test Time for Selective Populations in Semiconductor Manufacturing

D. Park, M. Schuldenfrei & G. Levy

Optimal+ Inc., Israel




As the semiconductor industry prepares for the Internet of Things, one of the major challenges it faces is maintaining quality levels as the volume of devices continues to grow. Semiconductor devices are moving from items of convenience (PCs) to necessity (smartphones) to mission-critical systems (autonomous automobiles). One aspect of manufacturing operations that can, and must, change in the face of ever-tightening quality requirements is how devices shipped into the end market are tested: testing must become more efficient while maintaining very high levels of quality. One way to achieve these diametrically opposed goals is through the use of Big Data analytics. Semiconductor manufacturing test today is a ‘one size fits all’ process, in which every device goes through the same battery of tests. Devices that initially fail are retested to confirm they are truly bad, but what about the devices that are ‘exceptionally good’? Testing devices whose tolerances are so ‘tight’ that they will statistically pass any remaining test intended to catch marginal devices is a waste of time and manufacturing resources. Using Big Data analytics within a manufacturing environment enables companies to establish a ‘Quality Index’ under which every individual device is ‘scored’ independently. If a device achieves a high enough quality score, it can be ‘excused’ from further testing, accelerating overall manufacturing throughput with zero impact on quality. This paper shows how semiconductor companies today are putting Big Data solutions in place to improve overall product quality and simultaneously reduce manufacturing costs, using data they already have in their possession.
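The scoring-and-excusal idea in the abstract can be illustrated with a minimal sketch. The function names, the choice of index (the smallest normalized margin between each measurement and its test limits), and the skip threshold below are all illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of a per-device "Quality Index" gate.
# The index here is the smallest normalized margin between a device's
# parametric measurements and their test limits: a device whose worst
# measurement still sits far inside the limits is a candidate for
# skipping the remaining tests.

def quality_index(measurements, limits):
    """Return the minimum normalized margin to a spec limit.

    measurements: {test_name: measured_value}
    limits:       {test_name: (lower_limit, upper_limit)}
    A value centered between its limits scores 1.0; a value sitting
    exactly on a limit scores 0.0.
    """
    margins = []
    for name, value in measurements.items():
        lo, hi = limits[name]
        half_range = (hi - lo) / 2.0
        center = (hi + lo) / 2.0
        margins.append(1.0 - abs(value - center) / half_range)
    return min(margins)

def can_skip_remaining_tests(measurements, limits, threshold=0.8):
    """Excuse a device from further testing only if every measurement
    is comfortably inside its limits (illustrative threshold)."""
    return quality_index(measurements, limits) >= threshold

# Example: a "tight" device is excused, a marginal one is not.
limits = {"vdd": (1.0, 1.4), "leakage": (0.0, 10.0)}
tight_device = {"vdd": 1.2, "leakage": 5.0}
marginal_device = {"vdd": 1.38, "leakage": 9.5}
```

In a production setting the score would come from multivariate statistics over the full parametric test history rather than a single-limit margin, but the gating logic stays the same: score each device independently, then skip downstream tests only above a conservative threshold.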


Keywords: automobiles, manufacturing, quality, semiconductor, test.

