An Effective Optimization of Time and Cost Estimation for Prefabrication Construction Management Using Artificial Neural Networks


Ratna Kumari Challa, Kanusu Srinivasa Rao

Department of Computer Science and Engineering, RGUKT-AP, IIIT - RK Valley, Kadapa 516330, Andhra Pradesh, India

Department of Computer Science and Technology, Yogi Vemana University, Kadapa 516005, Andhra Pradesh, India

Corresponding Author Email: 
kanususrinivas@gmail.com
Page: 115-123 | DOI: https://doi.org/10.18280/ria.360113

Received: 18 January 2021 | Revised: 17 January 2022 | Accepted: 25 January 2022 | Available online: 28 February 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

The success of a construction business depends on delivering projects within the agreed period and at the negotiated cost. The construction business includes prefabrication firms, logistics companies, on-site design industries, and so on. The prefabrication method involves assembling structural parts at a production plant and transporting them to the building site as finished or semi-assembled components. Artificial neural networks (ANNs) are used for optimization because of their capacity to handle both qualitative and quantitative difficulties in the building industry. An ANN processes data through input, hidden, and output layers, with the output depending on the weights of the hidden layer, and different modeling strategies are used to optimize these layers. ANNs cover a wide variety of issues in construction management, for instance cost analysis, decision making, and prediction of the mark-up percentage and the production rate in the construction industry. The main advantage of the prefabricated methodology is the simplicity of the procedure; another real benefit of the prefabrication process is its integrated versatility. The present study underlines that the total project period and cost are the key considerations in the current job procurement phase of prefabrication construction. The results of the proposed model are based on ANN algorithms, which mainly obtain the optimal weight values for the time and cost estimates.

Keywords: 

artificial neural network, perceptron, construction, prefabrication, optimization

1. Introduction

A country's economy is shaped by several factors, e.g., population size, manufacturing practices, agriculture, government policy, culture, schooling, infrastructure, etc. In order to fulfil basic needs such as food, shelter and clothing, citizens participate in different activities including agriculture, housing and the textile industry. Each of the industries listed above is supported by numerous other industries; for example, the machine-tool, agricultural and chemical industries support the textile industry. A substantial part of each sector's output is consumed by the public, and a proportion is fed to other companies as raw materials or machinery. Alongside these manufacturing practices, educational establishments prepare suitable workers for these sectors. In addition, there are various service organisations, including hospitals, travel, postal facilities, legal services, insurance agencies, financial firms, etc.

Optimization, meaning the maximization or minimization of a real function, is accomplished by selecting input values from within a permissible range and evaluating the function. Optimization theory and its techniques form a vast field of applied mathematics with many formulations. In general, optimization seeks the best available values of an objective function within a specified domain or set of constraints, and it accommodates many forms of objective function and many domain types. The emphasis of this analysis is on optimization strategies to reduce the time and cost of prefabricated buildings.

Based on the existing literature, the present state of the building sector and the usage of prefabricated technologies were analysed. The latest developments in housing design and the prefabrication technologies utilized by construction firms are discussed in this report. The appraisal practices for construction management are also examined and reflected on, and a favourable construction management approach is then suggested. Prefabrication construction is expected by the design sector, and steps are often implemented to improve adverse conditions. The present study shows why the Indian construction industry lags behind developed countries in the usage of state-of-the-art information technology and offers guidance for enhancing performance. The selection of the most beneficial construction equipment is important for savings, since it minimizes costs and simplifies and encourages construction. Suitable technical models exist that can support decision-making on the most favourable building process. The use and production of prefabrication technology in civil engineering has improved dramatically in developed countries relative to past decades, but it cannot yet be considered adequate [1].

With its excellent production methods for the efficiency limits of the prefabricated building phase, the ANN is expected to yield further advanced methodologies for reducing errors in the future. The policies and guidance of both the central and state governments promote the inclusion, alignment and regulation of all the nation's activities in order to optimise the nation's progress. It is also evident that every organisation in the country must remain competitive in order to increase its efficiency and ensure its survival.

A construction scheme is time-bound and employs tremendous resources of people, materials and machinery, with budgets running into millions and even trillions of currency units. Technical breakthroughs have also affected the building industry tremendously: high-rise buildings, industrial plants and infrastructural facilities have been built on a multidimensional scale. The construction industry is a major contributor to a country's growth, both domestic and foreign, and this sector employs more workers than most other industries. Due to political and economic pressures on the construction industry, such as the privatization of state-owned corporations, the opening of the Indian market, vigorous reconstruction work after catastrophes, intensive road building, etc., Indian construction firms have experienced extreme shifts over the past two decades. These turbulent developments have affected building industries in numerous ways. Many formerly powerful Indian firms have vanished from the sector, and others have turned their management and design processes towards prefabricated framework technology. Prefabricated architecture involves producing construction components in a production facility, transporting assembled or semi-finished components to the building site and eventually installing such components in a building [2]. The prefabrication process requires the manufacture and transfer of precast parts to the building site [3].

1.1 Prefabrication need

The industrialization of the building sector means a revolution in the whole design phase. The desire for safer, quicker and commercially feasible construction has given pre-casting technology a large market footprint. To avoid on-site adjustments, careful preparation, thorough design and scheduling of the project before building activities must be ensured. In several countries and territories, prefabrication technology has typically been utilised when a certain amount of building material is produced in a regulated setting, transported to building sites and installed in houses. Prefabricated housing units have been improved to the point that they cannot be distinguished from units in traditional buildings. In challenging weather environments, prefabricated structures can be necessary to increase the building pace and also to reduce the waste of construction material by using mass manufacturing under controlled material conditions. This approach has now been extended to installing broad-span beams and bridge decks rapidly, reducing formwork, shuttering and job costs at building sites. Prefabricated buildings are used in places not suited to conventional construction techniques, such as hilly areas and places where conventional building materials are not readily accessible. Structures that recur on a regular basis may be uniform buildings such as mass accommodation, stores, shelters, bus stands, protective cabins, site offices, road crossings, tubular structures, concrete building blocks, etc. [4].

1.2 Prefabrication construction standards

A prefabricated building consists of all the components of a traditional building, such as floors, ceilings, walls, pillars, columns and steps; the main distinction is that the components are created in a regulated setting and taken to the site for assembly, attachment and commissioning. The framework can contain a limited set of part forms, including pillars, columns, frames, roof slabs, etc. Components may be used to perform tasks such as load bearing and enclosure. The weight of the modules should be small, making them quicker and simpler to erect, which further minimizes injuries at the facility. The components should be suited to mechanized production or, at any rate, to a high level of mechanization in production. The framework must be designed according to structural characteristics such as span, crane weights, illumination, etc. [5].

Prefabricated parts such as concrete panels or steel and glass panels need to be handled carefully. The corrosion tolerance of the joints of prefabricated parts must be taken into account in order to prevent joint failure. The cost of shipping voluminous prefabricated parts can be higher than that of the materials from which they are produced, so more suitable packaging must often be provided. Wide prefabricated parts require heavy-duty cranes and precise measuring and placement [6].

2. Related Works

The papers [3, 7] suggested a generalised model combining genetic algorithms with artificial neural networks to improve industrial building projects in the field of engineering efficiency assessment modelling. Industrial building programmes have seen significant issues such as cost overruns and gaps in the timetable. Contingency costs are usually allocated at between 10% and 15% of project costs without taking into consideration any past change-order costs [8]. One tool estimates the contingency costs of any road construction operation covered by the contract using an artificial neural network model trained on data from historical change orders; the cost details of the change orders from road maintenance contracts were checked with this tool [9].

The fusion of artificial intelligence methods and conventional methods in [10] was considered an adequate way to estimate costs for a building project, including the necessary quantities of construction materials. These figures can conveniently be related to cost details (the unit cost per approximate quantity of building material) in order to measure cost [11]. This unit cost covers manufacture and shipping, erection, implementation, insurance, indirect site charges, regulation, overhead and profit. However, prefabrication technology in developing countries such as India has not achieved the degree of adoption that is warranted, and the estimation of prefabrication building period and cost utilising artificial neural networks has not been carried out. The aims of the study are therefore to estimate time and costs in prefabrication technology through optimization techniques. This research fills this information gap by investigating with an ANN whether the prefabrication content is substantially associated with the time performance and cost efficiency achieved by projects, expressed as a percentage of the final agreed amount; this information deficit motivated the study. The research articles discussed in this section relating to issues in the present field of analysis have led to the creation of a GUI to estimate optimal time and cost for prefabricated construction using an ANN integrated with PSO, ACO and GA [11, 12].

3. Methodology: Model for Time and Cost Optimization Using Artificial Neural Network

A powerful information processing framework, close in character to biological neural networks, is provided by the Artificial Neural Network (ANN). ANNs contain a broad variety of highly interconnected computing elements, called nodes, units or neurons, typically operating in parallel and arranged in standard architectures. Each neuron is linked to other neurons by connection links, and each link carries a weight containing information about the input signal. The neural net utilises this knowledge to solve a specific problem. The collective behaviour of an ANN is defined by its capacity to understand, remember and generalise training patterns or data, similar to the human brain; it models the neuron networks found in the brain, which is why the computing elements of an ANN are referred to as neurons or artificial neurons. Figure 1 displays the mathematical model of the artificial neuron [13].

Figure 1. Artificial neuron mathematical model

Neural networks that can extract meaning from complicated or imprecise data can be used to detect patterns and to spot trends that are too difficult to be observed by people or other computer techniques. Once trained in a specific knowledge category, a neural network can act as an expert that evaluates data and responds; this expert may be used to provide forecasts and solutions to the issue posed in new circumstances of concern. Adaptive learning, self-organization, real-time operation and fault tolerance through redundant knowledge coding are further advantages of an ANN. An ANN is a mathematical information processing model which, because of its specific characteristics, is useful and attractive for forecasting tasks: it is a self-adaptive, data-driven system able to learn from past experience; it can generalise the information obtained and accurately infer an unknown part; and it can approximate a continuous function to any required precision. The ANN is known to be a general simulation method for the estimation of nonlinear time series such as asset markets, foreign exchange rates, accident severity and high-precision traffic volume [14-16]. ANNs have also been used to estimate various facets of construction cost across different periods of the life cycle of a building, and ANNs and regression approaches have been applied successfully to several civil engineering challenges in recent years.
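As a concrete illustration of the artificial neuron model of Figure 1, the short Python sketch below computes a neuron output as the weighted sum of its inputs passed through a sigmoid activation; the input and weight values are illustrative assumptions, not data from the study.

# Minimal sketch of the artificial neuron of Figure 1 (values are illustrative).
import numpy as np

def neuron_output(x, w, b):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation.
    net = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-net))

x = np.array([0.5, 0.2, 0.8])    # input signals
w = np.array([0.4, -0.6, 0.9])   # connection weights
print(neuron_output(x, w, b=0.1))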

4. Important Features Included in ANN

It is well established that the human brain comprises a vast number of neurons, about 10^11, with multiple contacts. Table 1 lists the terminology of the biological neuron and its artificial-neuron equivalents. An ANN model is defined by three specific entities: the synaptic interconnections of the model, the activation functions, and the training or learning rules implemented to change and modify the connection weights.

Table 1. Biological and artificial neuron terms

 

Component    Biological neuron    Artificial neuron
1            Cell                 Neuron
2            Dendrites            Weights or interconnections
3            Soma                 Net input
4            Axon                 Output

4.1 Weights

An ANN is implemented as a computing device composed of a set of simple, interconnected computing elements known as neurons. The network is characterised by the connection weights between the neurons; these weights are the defining parameters of the neural network's nonlinear function. In general, a weight represents the strength of the connection between the input and output neurons: a positive weight corresponds to an excitatory synapse and a negative weight to an inhibitory synapse. An activation function is applied to compute the output. The weights carry information about the input signal, and the network uses this knowledge to solve a problem. The weights can be arranged in matrix form; the weight matrix is often called the connection matrix. Since the weight matrix includes all the adaptive components of an ANN, the collection of all weight matrices determines the network's information processing configuration, and the ANN can be specified by a suitable matrix. Thus, in the neural network, weights encode long-term memory and neuron activation states encode short-term memory. Training or learning, which relies on as many consistent training patterns as possible, is the method for determining these weights. ANNs are able to generalise from the data they are trained on.
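To make the weight-matrix (connection matrix) view concrete, the sketch below stores all connections between a three-input layer and a two-neuron layer in a single matrix, so the layer response is one matrix-vector product; the sizes and values are illustrative assumptions.

import numpy as np

# Connection (weight) matrix: rows index receiving neurons, columns index inputs.
W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.4]])   # 2 neurons, 3 inputs
bias = np.array([0.05, -0.1])

x = np.array([1.0, 0.5, -0.2])      # input vector
net = W @ x + bias                  # net input of each neuron in the layer
out = np.tanh(net)                  # bipolar (tanh) activation
print(out)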

4.2 Training

Like humans, artificial neural networks learn from experience. An ANN is configured for a specific use, for example pattern recognition or data classification, through learning or training. Learning or training is the mechanism by which the neural network adjusts its parameters so that it responds to a stimulus with the appropriate answer. In general, there are two forms of learning in ANNs: parameter learning and structure learning. Parameter learning updates the weights in a neural net, while structure learning focuses on improving the network structure and addresses the number and connection forms of processing elements. Parameter learning is used in this analysis, since it updates the connection weights in a neural network. Beyond these two categories, ANN learning is usually divided into three classes: supervised, unsupervised and reinforcement learning. This analysis uses supervised learning, as the real-time events considered here require a supervised learning methodology. Learning or training is done with the assistance of a teacher or supervisor: each input vector needs a corresponding target vector, which represents the desired output, and the input vector together with the target vector is considered a training pair. The network is told explicitly what should be emitted as output. Figure 2 illustrates the working phase of a supervised learning network as a block diagram. The input vector is presented to the network during training and produces an output vector, which is the actual output vector.

Figure 2. Supervised learning

The actual output vector is then compared with the expected or target output vector. If there is a disparity between the two output vectors, the network produces an error signal, which is used for weight correction until the actual output matches the expected or target output. This style of training requires a supervisor or teacher to reduce the error, and a network trained in this way is said to use the supervised teaching technique. In supervised learning, the correct target output values for each input pattern are assumed to be known.
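A minimal sketch of a supervised training pair and its error signal, with purely illustrative numbers (the network output here is just a placeholder vector):

import numpy as np

# One supervised training pair: input vector and its target (goal) vector.
x_input = np.array([0.3, 0.8, 0.5])
target = np.array([1.0, 0.0])

actual = np.array([0.7, 0.2])    # output the network happened to produce for x_input
error_signal = target - actual   # disparity used to drive the weight correction
print(error_signal)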

4.3 Neuron

Neurons can be visualized through their arrangement in layers. An ANN consists of a series of strongly interlinked processing elements (neurons), in which each input of a processing element is connected by a weight to other processing elements or to itself. Therefore, the arrangement and geometry of these processing elements are important for an ANN; the connection pattern must be noted and the role of each processing unit defined. The arrangement of neurons into layers and the patterns of connections formed within and between layers are called network architectures. Common architectures include the single-layer feed-forward network, the multi-layer feed-forward network, the single-layer recurrent network, the multi-layer recurrent network, and a single node with its own feedback. This analysis uses a multilayer feed-forward network, which is made up of interconnected layers: input, output and hidden layers. The input layer receives the input and has no function other than buffering the input signal, while the output layer produces the network output. Any layer formed between the input and output layers of the neural network is referred to as a hidden layer. Figure 3 displays a three-layer graphical layout. The hidden layer is an internal component of the network that does not interact with the external world directly. Note that an ANN may have from zero to multiple hidden layers; the more hidden layers, the more complex the network, although this may improve the performance response. In a fully connected network, every output from one layer is linked to every node of the next layer. A network is called a feed-forward network when no neuron's output feeds back into a node of the same layer or a preceding layer.

Figure 3. Three-layer neural network
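A sketch of the fully connected three-layer feed-forward structure of Figure 3, with illustrative layer sizes and random initial weights (both are assumptions for demonstration):

import numpy as np

rng = np.random.default_rng(3)

# Fully connected three-layer feed-forward structure (sizes are illustrative).
n_input, n_hidden, n_output = 5, 8, 2
layers = {
    "input_to_hidden": rng.uniform(-0.5, 0.5, (n_input, n_hidden)),
    "hidden_to_output": rng.uniform(-0.5, 0.5, (n_hidden, n_output)),
}
# Every node of one layer connects to every node of the next layer,
# and no connection feeds back to the same or a previous layer.
print({name: w.shape for name, w in layers.items()})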

4.4 ANN training rules

There are various rules for network training, and the multi-layer perceptron (MLP) is one of the best known. An MLP is a feed-forward network in which information travels from the input side to the output layer via the hidden layers, and the literature on it is extensive. Many experiments have demonstrated that the MLP is a universal approximator: an MLP with a sufficient number of hidden neurons can approximate any finite nonlinear function with great precision. Each layer consists of neurons that are the processing elements (PEs) of the network. Every neuron in each layer is connected by weighted links, characterised by weight coefficients, to the whole next layer of neurons, and any change in the coefficients affects the network's behaviour. In fact, the main objective of network training is to determine the best weight coefficients and achieve optimal performance.

The Multi-Layer Perceptron (MLP) trained with the back-propagation algorithm consists of groups of nodes in multiple layers, where each node is connected by weights only to the nodes in the adjacent layers. The information appears at the input layer as the input vector, and the processed data is delivered as the output vector at the output layer. According to the back-propagation algorithm, the input and output of a node in the hidden layer of the MLP neural network are given by Eqns. (1) and (2).

$n e t_{j}=\sum_{i} w_{i j}\, x_{i}$              (1)

$o u t_{j}=f\left(n e t_{j}\right)$         (2)

where outj is the output of neuron j, wij is the weight coefficient connecting the i-th input of the first layer to neuron j of the second layer, xi is the i-th input, and f is the activation function.

During training, the MLP propagates the first-layer inputs forward, multiplied by weight coefficients that are initially set to randomly chosen values. Each neuron operates in two steps: it first computes the weighted sum of its inputs, the net input, and then injects this sum into what is called the activation function.
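The two-step operation of Eqns. (1) and (2) can be sketched for a whole hidden layer as follows; the layer sizes and random initial weights are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 4, 3

# Randomly chosen initial weights, as at the start of MLP training.
W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))

def forward(x, W):
    net = W @ x                        # step 1: weighted sum, Eq. (1)
    return 1.0 / (1.0 + np.exp(-net))  # step 2: activation function, Eq. (2)

x = np.array([0.2, 0.9, 0.4, 0.1])
print(forward(x, W))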

4.5 Activation function

The activation function is used to compute the output of an ANN from the net input. Additional scaling or activation can be applied to make the network more effective and to produce accurate output, and these activations lead to the same overall result. The processing in a processing element can be seen as comprising two key parts, input and output. The input of a processing element is combined through an integration function (f); this function combines the activation, data or evidence from an external source or other processing elements into the net input of the element. The nonlinear activation mechanism guarantees that the response of a neuron is bounded. Certain nonlinear characteristics are needed for a multi-layer network to gain anything over a single-layer network: when a signal is passed through a multi-layer network with linear activation functions, the response is identical to the output of a single-layer network. This is why nonlinear functions are commonly used in multilayer networks rather than linear functions. The activation functions used throughout the analysis belong to the hyperbolic tangent and sigmoid families. The mechanism starts with the neurons of the middle layer, and the outputs are eventually generated in the last layer. The schema of the activation function used in this analysis is shown in Figure 4.

Figure 4. Activation function
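For reference, a small sketch of the two activation families used in the analysis, together with the derivatives needed later for back-propagation (the test value is arbitrary):

import numpy as np

def binary_sigmoid(x):               # output range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def binary_sigmoid_deriv(x):
    s = binary_sigmoid(x)
    return s * (1.0 - s)

def bipolar_sigmoid(x):              # hyperbolic tangent, output range (-1, 1)
    return np.tanh(x)

def bipolar_sigmoid_deriv(x):
    return 1.0 - np.tanh(x) ** 2

print(binary_sigmoid(0.5), bipolar_sigmoid(0.5))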

In order to evaluate the relationship between the dependent and independent variables, the ANN, a powerful information processing device modelled on the structure of the human brain, can approximate any finite nonlinear model. The mechanism is centred on error back-propagation, which reduces the error between the network output and the intended output or target. During learning, an error is determined between the network output and the target output and sent back from the last layer towards the first. The weight coefficients are then corrected with Eqns. (3) and (4); this method is referred to as "error back-propagation." With the new weight coefficients, the network again produces an output, the error is recomputed and propagated back through the network, and the process continues until the error reaches its lowest value over several epochs.

$\mathrm{W}_{\mathrm{ij}}(\mathrm{t}+1)=\mathrm{W}_{\mathrm{ij}}(\mathrm{t})+\eta \delta_{\mathrm{pi}} \cdot \mathrm{O}_{\mathrm{pj}}$        (3)

$\mathrm{W}_{\mathrm{ij}}(\mathrm{t}+1)=\mathrm{W}_{\mathrm{ij}}(\mathrm{t})+\eta \delta_{\mathrm{pi}} \mathrm{O}_{\mathrm{pj}}+\alpha\left[\mathrm{W}_{\mathrm{ij}}(\mathrm{t})-\mathrm{W}_{\mathrm{ij}}(\mathrm{t}-1)\right]$        (4)

where Wij(t+1) is the weight coefficient of the connection between neurons i and j at step t+1; Wij(t) is the same weight coefficient at step t; η is the learning coefficient; δpi is the discrepancy between the target output and the network output at neuron i for pattern p; Opj is the output of neuron j for pattern p; and α is the momentum coefficient.
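The weight correction of Eqns. (3) and (4) can be sketched as follows; the learning coefficient, momentum coefficient and numerical values are illustrative assumptions.

import numpy as np

eta, alpha = 0.1, 0.8                # learning and momentum coefficients

def update_weights(W, W_prev, delta, out_prev):
    # Eq. (4): gradient term plus a momentum term based on the previous change.
    grad_term = eta * np.outer(delta, out_prev)   # eta * delta_pi * O_pj
    momentum_term = alpha * (W - W_prev)          # alpha * [W(t) - W(t-1)]
    return W + grad_term + momentum_term

W      = np.array([[0.20, -0.10], [0.40, 0.30]])
W_prev = np.array([[0.18, -0.12], [0.39, 0.31]])
delta  = np.array([0.05, -0.02])     # error terms of the receiving neurons
out    = np.array([0.60, 0.90])      # outputs of the sending neurons
print(update_weights(W, W_prev, delta, out))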

5. Back Propagation Network

Training involves steps such as evaluating the output for the input data, comparing the estimated output of each pattern with the target output of that pattern, and adjusting the weights of each neuron to minimize the discrepancy between the target and estimated values, i.e., reducing the error by propagating an error function backwards through the neural network. The back-propagation algorithm is used for this training method and is one of the most significant developments in neural networks; it has re-energized the research and engineering community to model and process various quantitative phenomena using neural networks. This learning algorithm is applicable to multi-layer feed-forward networks consisting of processing elements with continuous differentiable activation functions, and networks trained with it are often called back-propagation networks. The algorithm offers a way to adjust the weights inside a back-propagation network, for a specified collection of training input-output pairs, so as to better identify the given input patterns. The basic principle of the weight updates is essentially the gradient-descent approach used for simple perceptron networks with differentiable units; it is a way of propagating the error back through the hidden units. The goal is to train the network so that it balances its capacity to respond correctly to the training inputs (memorization) with its capacity to provide reasonable responses to inputs that are similar but not identical to those used in training (generalisation).

The back-propagation algorithm differs from other networks in the mechanism by which the weights are determined during the learning or training phase. The general issue with multi-layer perceptrons is to efficiently determine the weights of the hidden layers that will lead to a very small or zero output error, and network training becomes more complicated as hidden layers are added. The error must be determined in order to correct the weights. The error is easily determined at the output layer, as the gap between the actual or estimated output and the expected or target output; however, there is no direct knowledge of the error at the hidden layers, so other methods must be used to estimate it. This reduces the output error to a minimum, which is the overall objective of the current study. The back-propagation network is trained in three stages: feed-forward of the input training pattern, error computation and back-propagation, and weight updates. For testing the back-propagation network, only the feed-forward phase is computed. More than one hidden layer may be helpful, but one hidden layer is often sufficient; even if training is very slow, the network produces its output very quickly once trained. The three-layer back-propagation network, which has proven valuable in modelling input-output relationships, is the most frequently used connection pattern. The number of neurons in the input and output layers corresponds to the number of input and output variables in the data set, while the number of hidden layers and the number of neurons in each hidden layer follow no particular rule, so the number of neurons in any hidden layer must be found experimentally. A network with two hidden layers or two sub-layers is usually enough to solve most of the complicated problems in civil engineering applications. A network can be trained to replicate the desired input-output relationship by adjusting the weights between the neurons. The nonlinear transformation between the input and the output takes place in a hidden layer by means of a transfer or activation function, which converts the weighted inputs. Linear, sigmoid, log-sigmoid and tan-sigmoid are the most common activation functions. Neural networks are part of artificial intelligence and have been widely deployed in numerous fields; in view of this strong capability, researchers have worked towards a new generation of ANNs with greater power and accuracy.

5.1 Training back-propagation network algorithm

In the training algorithm, the following terminology is used:

x = input training vector (x1, ..., xi, ..., xn)

α = learning rate parameter

t = target output vector (t1, ..., tk, ..., tm)

xi = input unit i

Because the input layer uses the identity activation function, its input and output signals are identical.

v0j = bias on the jth hidden unit; w0k = bias on the kth output unit; zj = hidden unit j.

The net input to zj is:

$z_{\text{in-}j}=v_{0j}+\sum_{i=1}^{n} x_{i} v_{ij}$         (5)

The output of zj is:

$z_{\text{out-}j}=f\left(z_{\text{in-}j}\right)$        (6)

The net input to yk is represented as:

$y_{\text{inp-}k}=w_{0k}+\sum_{j=1}^{p} z_{j} w_{jk}$        (7)

And the output is:

$y_{\text{outp-}k}=f\left(y_{\text{inp-}k}\right)$        (8)

δk denotes the error-correction weight adjustment for wjk due to an error at output unit yk, which is propagated back to the hidden units feeding into yk; δj denotes the error-correction weight adjustment for vij due to the back-propagation of the error to hidden unit zj. Furthermore, it should be noted that binary sigmoidal and bipolar sigmoidal activation functions are widely utilized; these functions are used in the back-propagation network because of three attributes: continuity, differentiability and non-decreasing monotonicity. The range of the binary sigmoid is between 0 and 1, and that of the bipolar sigmoid is between -1 and 1. The error back-propagation learning algorithm uses the incremental method for weight updates, which adjusts the weights immediately after each training pattern is presented. The following algorithm defines the error back-propagation learning procedure:

Algorithm-1: Error back-propagation learning algorithm

Step 0: Initialize the weights and the learning rate.

Step 1: Execute Steps 2-9 while the stopping condition is false.

Step 2: Execute Steps 3-8 for each training pair. [Phase 1: Feed-forward]

Step 3: Each input unit xi (i = 1 to n) receives the input signal and sends it on to the hidden units.

Step 4: Each hidden unit zj (j = 1 to p) sums its weighted input signals to compute the net input zin-j as in Eq. (5), and calculates its output with the binary or bipolar sigmoidal activation function, zout-j = f(zin-j), as in Eq. (6).

Step 5: Compute the net input for each output unit yk (k = 1 to m),

$y_{\text{inp-}k}=w_{0k}+\sum_{j=1}^{p} z_{j} w_{jk}$

and apply the activation function to obtain the output signal,

yk = f(yinp-k). [Phase 2: Error back-propagation]

Step 6: Each output unit yk receives a target pattern corresponding to the input training pattern and calculates the error-correction term,

δk = (tk − yk) f′(yinp-k)

where the derivative f′(yinp-k) is obtained from the activation function. Compute the weight and bias corrections on the basis of this error term:

∆wjk = αδk zj

∆w0k = αδk

Also, send δk back to the hidden layer.

Step 7: Each hidden unit zj (j = 1 to p) sums its delta inputs from the units of the output layer,

δinp-j = Σk δk wjk

and multiplies δinp-j by the derivative f′(zin-j) to determine its error term,

δj = δinp-j f′(zin-j)

The derivative f′(zin-j) is determined from the activation function, depending on whether the sigmoidal function is binary or bipolar. Based on the computed δj, the weight and bias corrections are:

∆vij = αδj xi

∆v0j = αδj

[Phase 3: Weight and bias updating]

Step 8: Each output unit yk (k = 1 to m) updates its weights and bias:

wjk(new) = wjk(old) + ∆wjk

w0k(new) = w0k(old) + ∆w0k

Step 9: Each hidden unit zj (j = 1 to p) updates its weights and bias:

vij(new) = vij(old) + ∆vij

v0j(new) = v0j(old) + ∆v0j

Step 10: Test the stopping condition. The stopping condition may be a specified number of epochs or the actual output matching the target output.
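A compact NumPy sketch of Algorithm-1 for one hidden layer is given below; the XOR training pairs, layer sizes, learning rate and epoch count are illustrative assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative training pairs (XOR): X = inputs, T = targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
V = rng.uniform(-0.5, 0.5, (n_in, n_hid))    # input-to-hidden weights v_ij
v0 = np.zeros(n_hid)                         # hidden biases v_0j
W = rng.uniform(-0.5, 0.5, (n_hid, n_out))   # hidden-to-output weights w_jk
w0 = np.zeros(n_out)                         # output biases w_0k

f = lambda a: 1.0 / (1.0 + np.exp(-a))       # binary sigmoid
alpha = 0.5                                  # learning rate

for epoch in range(10000):                   # Step 1: repeat until stopping condition
    for x, t in zip(X, T):                   # Step 2: for each training pair
        z = f(v0 + x @ V)                    # Steps 3-4: hidden outputs, Eqs. (5)-(6)
        y = f(w0 + z @ W)                    # Step 5: output signals, Eqs. (7)-(8)

        delta_k = (t - y) * y * (1 - y)          # Step 6: output error term
        delta_j = (delta_k @ W.T) * z * (1 - z)  # Step 7: hidden error term

        W += alpha * np.outer(z, delta_k)    # Step 8: update output weights and bias
        w0 += alpha * delta_k
        V += alpha * np.outer(x, delta_j)    # Step 9: update hidden weights and bias
        v0 += alpha * delta_j

print(f(w0 + f(v0 + X @ V) @ W))             # outputs after training (Step 10 check)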

The method for checking the back-propagation network is as follows:

Algorithm-2: Testing the back-propagation network algorithm

Step 0: Initialize the weights, taking them from the training algorithm.

Step 1: Perform Steps 2-4 for each input vector.

Step 2: Set the activation of each input unit xi (i = 1 to n).

Step 3: Compute the net input and output of each hidden unit zj. For j = 1 to p,

$z_{\text{in-}j}=v_{0j}+\sum_{i=1}^{n} x_{i} v_{ij}$

$z_{\text{out-}j}=f\left(z_{\text{in-}j}\right)$

Step 4: Compute the output of each unit of the output layer. For k = 1 to m,

$y_{\text{inp-}k}=w_{0k}+\sum_{j=1}^{p} z_{j} w_{jk}$

$y_{\text{outp-}k}=f\left(y_{\text{inp-}k}\right)$

Step 5: Use the sigmoid activation function to compute the outputs.
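Testing then reuses only the feed-forward computation with the weights obtained from training, as in the sketch below; the 2-2-1 layer sizes and weight values are placeholders for illustration, not results from the study.

import numpy as np

f = lambda a: 1.0 / (1.0 + np.exp(-a))       # sigmoid activation (Step 5)

def predict(x, V, v0, W, w0):
    # Algorithm-2: feed-forward pass with weights taken from the training phase.
    z = f(v0 + x @ V)                        # Step 3: hidden-unit outputs
    return f(w0 + z @ W)                     # Step 4: output-layer signals

# Placeholder "trained" weights for a 2-2-1 network.
V  = np.array([[4.5, -4.3], [4.6, -4.4]])
v0 = np.array([-6.9, 1.7])
W  = np.array([[7.2], [6.8]])
w0 = np.array([-3.3])

print(predict(np.array([1.0, 0.0]), V, v0, W, w0))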

6. Results and Discussion

Table 2. Specific expected values and time efficiency optimization outcomes

Project    ACO         PROPOSED-ANN    GA          PSO
1          0.313042    0.505           0.475717    0.505564
2          0.478197    0.5454          0.305421    0.544599
3          0.604278    0.6767          0.766731    0.666306
4          0.667241    0.6969          0.549067    0.699523
5          0.745651    0.707           0.707715    0.71204
6          0.728917    0.707           0.85332     0.709604
7          0.809717    0.707           0.793017    0.727379
8          0.880547    0.7575          0.57479     0.759841
9          0.808635    0.707           0.790475    0.73096
10         0.907531    0.7777          0.617851    0.820032
11         0.847141    0.8989          0.86018     0.89848
12         0.877086    0.8585          0.924343    0.838944
13         0.903589    0.8888          0.906384    0.836984
14         0.815247    0.7272          0.89099     0.730299
15         0.916166    0.909           0.85941     0.861455
16         0.915693    0.8686          0.862112    0.867932
17         0.837087    0.7575          0.944879    0.762516
18         0.868591    0.8888          0.907051    0.856908
19         0.901848    0.8686          0.886342    0.85664
20         0.870532    0.808           0.949476    0.793
21         0.854635    0.909           0.967301    0.877097
22         0.898594    0.808           0.7671      0.811039
23         0.909876    0.9595          0.941146    0.963207
24         0.901397    0.9595          0.935353    0.966835
25         1.103093    1.1009          1.009315    1.120884
26         0.820119    0.9898          1.000753    1.146215
27         0.971434    1.2625          1.046805    1.255585
28         1.54659     1.2726          1.261701    1.26737
29         0.936305    1.2019          0.989482    1.187547
30         1.151524    1.3029          1.055594    1.304202

In recent years, the ability of ANNs to address the quantitative and qualitative challenges of the building sector has become quite important. This study proposes to estimate the cost performance (CP) and time performance (TP) of the building phase by using an ANN. Various percentages of prefabrication content are applied to the building process, and the inputs of the ANN include the projected duration, actual duration, estimated costs and actual costs. The ANN framework is first trained with the collected evidence; this is the preliminary stage of the prediction method. The best value produced by the ANN structure optimizes the structure's weights, and different optimization methods are used to obtain the optimum weights. If the outcomes obtained do not reach the desired standard, the training phase is repeated to adjust the structure to an acceptable level so that the output can be predicted. Once the error between the actual values and the expected values is close to zero, the built models are used to forecast the unknown input values and optimize process time and cost. The results of the proposed work are obtained in the MATLAB 2014a framework, on an i5 processor with 4 GB of RAM, using the ANN estimation method. The Graphical User Interface (GUI) is then generated with the system configuration listed above to obtain the results of the proposed ANN model.

Table 3. Specific expected values and different cost efficiency optimization outcomes

Project    ACO         PROPOSED-ANN    GA          PSO
1          0.40329     0.4242          0.439311    0.434297
2          0.222353    0.4545          0.636551    0.474718
3          0.662562    0.606           0.687698    0.601693
4          1.236499    0.6565          1.134537    0.652673
5          1.103738    0.707           0.852331    0.683787
6          0.735487    0.7171          0.616707    0.685689
7          0.956325    0.7272          0.801118    0.71682
8          0.894406    0.7272          1.015598    0.744952
9          1.788844    0.7373          0.917525    0.706614
10         0.18126     0.7474          1.843239    0.755164
11         1.102273    0.7575          2.128976    0.766745
12         2.051273    0.7575          0.937217    0.787342
13         0.68549     0.7575          0.94318     0.817473
14         0.768466    0.7676          0.510128    0.770129
15         1.02624     0.7676          0.952393    0.830791
16         0.950289    0.7676          1.256867    0.788429
17         1.300512    0.7676          0.846606    0.760698
18         1.354074    0.7878          0.808261    0.81634
19         1.076136    0.7878          1.025673    0.8044
20         0.7208      0.7979          0.925393    0.822693
21         1.211155    0.9292          0.901924    0.96387
22         0.686386    0.909           0.963414    0.911874
23         0.956377    1.01            1.069805    0.982713
24         0.627517    0.9898          1.027065    0.977174
25         1.092363    1.0605          2.452037    0.967544
26         1.249404    1.111           0.915259    1.037313
27         1.595519    1.1615          0.955866    1.1432
28         0.839737    1.1918          2.824951    1.194755
29         0.082754    1.212           1.300737    1.075503
30         1.315306    1.212           1.471036    1.377558

Tables 2 and 3 show that, among the different optimization strategies applied to the ANN, the values obtained with PSO are consistently the closest to the actual time and cost results.

6.1 Matrix convergence

The convergence graph is drawn with the iteration on the X-axis and the fitness on the Y-axis. The convergence graph is intended to show the iterative nature of the time performance and cost performance solutions during the training phase, which allows the iterations to approach virtually zero error. The data obtained from projects 1 to 30 were used as the training dataset. Figure 5 displays the convergence graph with the fitness values of the building efficiency parameters of prefabrication technology. The diagrams display the training performance measured against the iterations of the GA, PSO and ACO, with the weights adjusted between -500 and 500; the error values of the proposed ANN model are then calculated. Essentially, the graph shows that the PSO mechanism reaches low fitness within an optimum number of iterations: the minimum error of 0.5397 is obtained with PSO at the 92nd iteration.

Figure 5. Graph of convergence

The minimum error value of the suggested method is compared with those of the GA and ACO. The overall error differences are 77.55% and 78.57% with respect to the GA and ACO techniques, respectively. At the initial iterations, the behaviour of the convergence chart indicates that the fitness defined by the objective function of the algorithm is high, and the error values are then steadily decreased or minimized. The graph thus demonstrates that the PSO methodology yields the best fitness values.
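Although the reported experiments were run in MATLAB 2014a, the weight-search step can be illustrated with a short Python sketch of particle swarm optimization over weights bounded in [-500, 500]; the fitness function, swarm size and iteration count here are assumptions for illustration only, not the study's settings.

import numpy as np

rng = np.random.default_rng(2)

def fitness(weights):
    # Illustrative fitness: squared error between a sigmoid prediction and targets.
    x = np.linspace(0, 1, 20)
    target = 0.5 + 0.4 * x                   # stand-in for observed time/cost ratios
    pred = 1.0 / (1.0 + np.exp(-(weights[0] * x + weights[1])))
    return np.mean((pred - target) ** 2)

# Basic particle swarm optimization over two weights bounded in [-500, 500].
n_particles, n_dims, iters = 20, 2, 100
pos = rng.uniform(-500, 500, (n_particles, n_dims))
vel = np.zeros((n_particles, n_dims))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, n_dims))
    r2 = rng.random((n_particles, n_dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -500, 500)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best weights:", gbest, "fitness:", pbest_val.min())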

6.2 Outcomes and expected model values

The neural network processes the experimental outcomes together with the initial time and cost values for the various prefabrication contents. Tables 2 and 3 show, respectively, the actual expected values and the results of the various optimization strategies for time and cost performance.

7. Conclusions

Awareness of the specific requirements and parameters is important for success in overseeing a building project. An ANN model is used to form and execute the company's corporate plans. The key purpose of the research was to provide an interface with an optimal time and cost prediction model for the expected technologies, and the criteria to be satisfied have been specified throughout this procedure. Since time and cost optimization is important, both the duration and the overall costs of the project can be minimised. The cost limits and process time frame of the construction process in prefabrication technology are obtained using the PSO algorithm with the ANN, which primarily obtains the optimal weight values for the model. The typical optimum findings are used to address multivariable development problems that require the design variables to be chosen according to the weights. During this process, the different prefabrication contents, costs and durations are taken into consideration. With its excellent production methods for the efficiency limits of prefabrication construction, the ANN can in future be developed into a considerably more advanced approach for reducing errors. Previous experience and findings with resource-allotment optimization problems have shown that modern approaches and models can be oriented towards the straightforward regulation of input and output parameters. In addition, the record of the evolutionary optimization is essential for assessing the probability of cancellation or failure of the chosen software.

References

[1] Naik, M.G., Radhika, V. (2015). Time and cost analysis for highway road construction project using artificial neural networks. Journal of Construction Engineering and Project Management, 5(1): 26-31. https://doi.org/10.6106/JCEPM.2015.5.1.026

[2] Moghadas, T.S., Tavakoli, M.G., Moghadas, T.S. (2007). Analysis of buried plastic pipes in reinforced sand under repeated-load using neural network and regression model. International Journal of Civil Engineering, 5(2): 118-133.

[3] Rezaie Moghaddam, F., Afandizadeh, S., Ziyadi, M. (2011). Prediction of accident severity using artificial neural networks. International Journal of Civil Engineering, 9(1): 41-48. 

[4] Günaydın, H.M., Doğan, S.Z. (2004). A neural network approach for early cost estimation of structural systems of buildings. International Journal of Project Management, 22(7): 595-602. https://doi.org/10.1016/j.ijproman.2004.04.002

[5] Schalkoff, R.J. (1997). Artificial Neural Networks. McGraw-Hill, New York, 146-188. 

[6] Hegazy, T., Ayed, A. (1998). Neural network model for parametric cost estimation of highway projects. Journal of Construction Engineering and Management, 124(3): 210-218. https://doi.org/10.1061/(ASCE)0733-9364(1998)124:3(210) 

[7] Sirisati, R.S. (2020). Machine learning based diagnosis of diabetic retinopathy using digital fundus images with CLAHE along FPGA Methodology. Int. J. Adv. Sci. Technol. (IJAST-2005-4238), 29(3): 9497-9508. 

[8] Kohzadi, N., Boyd, M.S., Kermanshahi, B., Kaastra, I. (1996). A comparison of artificial neural network and time series models for forecasting commodity prices. Neurocomputing, 10(2): 169-181. https://doi.org/10.1016/0925-2312(95)00020-8 

[9] ElSawy, I., Hosny, H., Razek, M.A. (2011). A neural network model for construction projects site overhead cost estimating in Egypt. International Journal of Computer Science Issues, 8(3): arXiv preprint arXiv:1106.1570.

[10] Swamy, S.R., Rao, P.S., Raju, J.V.N., Nagavamsi, M. (2019). Dimensionality reduction using machine learning and big data technologies. Int. J. Innov. Technol. Explor. Eng. (IJITEE), 9(2): 1740-1745. 

[11] Tu, K.J., Huang, Y.W. (2013). Predicting the operation and maintenance costs of condominium properties in the project planning phase: An artificial neural network approach. International Journal of Civil Engineering, 11(4): 242-250.

[12] Kennedy, J., Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN'95-International Conference on Neural Networks, 4: 1942-1948. https://doi.org/10.1109/ICNN.1995.488968

[13] Flood, I. (2008). Towards the next generation of artificial neural networks for civil engineering. Advanced Engineering Informatics, 22(1): 4-14. https://doi.org/10.1016/j.aei.2007.07.001 

[14] Hann, T.H., Steurer, E. (1996). Much ado about nothing? Exchange rate forecasting: Neural networks vs. linear models using monthly and weekly data. Neurocomputing, 10(4): 323-339. https://doi.org/10.1016/0925-2312(95)00137-9

[15] Taylor, J.G., Taylor, J.G. (1996). Neural Networks and Their Applications. New York, John Wiley & Sons.

[16] Zhang, G., Patuwo, B.E., Hu, M.Y. (1998). Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, 14(1): 35-62. https://doi.org/10.1016/S0169-2070(97)00044-7