A Multi-Layer Business Intelligence Framework for Real-Time KPI Monitoring in Distribution Operations

Nabila Noor Qisthani*, Miftahol Arifin, Dimas Fanny Hebrasianto Permadi, Syarif Hidayatuloh, Ratih Windu Arini, Yulinda Uswatun Kasanah

Logistics Engineering, Telkom University, Purwokerto 53147, Indonesia

Informatics Engineering, Telkom University, Purwokerto 53147, Indonesia

Corresponding Author Email: miftahola@telkomuniversity.ac.id

Pages: 233-252 | DOI: https://doi.org/10.18280/jesa.590121

Received: 17 November 2025 | Revised: 19 January 2026 | Accepted: 28 January 2026 | Available online: 31 January 2026

© 2026 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Modern distribution operations generate heterogeneous, high-frequency operational data that requires continuous monitoring to ensure system stability and operational responsiveness. However, in many small and medium-sized enterprises, performance monitoring remains fragmented, relying heavily on spreadsheet-based reporting and visualization-centric dashboards that lack architectural mechanisms for consistent Key Performance Indicator (KPI) computation, near real-time tracking, and auditable control. This study proposes a multi-layer Business Intelligence (BI) framework for near real-time KPI monitoring in multi-divisional distribution environments. From a systems engineering perspective, the proposed architecture explicitly decomposes data ingestion, transformation, KPI logic execution, semantic aggregation, and human–system interaction into interoperable layers. A dedicated rule-based KPI processing layer is introduced to formalize performance computation and anomaly detection, thereby enabling reliable propagation of performance signals across operational subsystems. The system is developed following a Design Science Research methodology and evaluated through an industrial case study conducted in a building materials distribution firm. System effectiveness is assessed using a usability-driven evaluation framework that examines the quality of human–system interaction across multiple user roles. The results indicate that the proposed architecture improves monitoring consistency, reduces information latency, and enhances users’ ability to detect and interpret performance deviations in near real-time. This study contributes to the BI and operations management literature by conceptualizing KPI logic as a first-class architectural component within decision support systems and by presenting an implementable monitoring framework that is particularly relevant for distribution-intensive operations in developing economic contexts.

Keywords: 

decision support system, Key Performance Indicator, multi-layer architecture, distribution systems, Business Intelligence, human–system interaction

1. Introduction

In modern data-intensive industries, the ability to monitor Key Performance Indicators (KPIs) in real time has become a critical enabler of organizational responsiveness and competitive advantage [1]. In developing economies, particularly within the logistics and distribution sector, this challenge is exacerbated by limited IT infrastructure and a persistent reliance on manual and legacy information systems. For multi-divisional distribution firms, such as those operating in building materials, fast-moving consumer goods, and retail networks, the increasing complexity of sales operations, warehouse activities, logistics flows, and workforce management generates large volumes of fragmented operational data that must be consolidated into coherent, decision-ready insights.

Despite this growing complexity, many small and medium-sized enterprises (SMEs) in developing regions continue to rely heavily on spreadsheet-based KPI reporting [2, 3]. Such practices are prone to data delays, inconsistencies, versioning conflicts, and limited support for anomaly detection, ultimately weakening managerial situational awareness and cross-divisional coordination [4]. As a result, decision-making often remains reactive rather than anticipatory, particularly in operationally intensive environments.

Business Intelligence (BI) technologies are widely recognized for their potential to address these challenges by integrating heterogeneous data sources and transforming them into actionable insights for operations and strategy. Prior studies have demonstrated that BI dashboards can improve performance visibility, reporting speed, and managerial decision effectiveness [5]. However, much of the existing BI literature remains dominated by tool-centric or visualization-focused approaches. In many cases, dashboards are conceptualized as isolated front-end artifacts rather than as integral components of a coherent analytical architecture that explicitly supports continuous KPI computation, rule-based categorization, semantic aggregation, and multi-role decision support [6-8].

Furthermore, although real-time analytics has received increasing attention, relatively few studies propose BI frameworks specifically tailored to the structural characteristics of multi-divisional distribution firms. In such organizations, KPI structures vary substantially across sales, warehouse operations, logistics activities, and human resources. This condition necessitates not only standardized KPI definitions but also role-sensitive aggregation and anomaly detection mechanisms [9, 10]. Existing BI architectures rarely formalize these requirements, resulting in a persistent gap between high-level BI concepts and practical implementation in operational distribution contexts. Moreover, many prior studies emphasize high-cost predictive analytics, leaving limited guidance for cost-effective, rule-based BI frameworks suitable for SMEs operating under resource constraints [11].

Another dimension that remains underexplored in BI research is usability. While the technical sophistication of BI platforms has advanced considerably, system effectiveness ultimately depends on how well dashboards support intuitive interaction, clear information presentation, and efficient task execution. Human–Computer Interaction (HCI) methods, such as Nielsen’s heuristic evaluation and scenario-based usability testing, provide established techniques for assessing interactive systems. However, their systematic integration into BI research and evaluation remains limited, particularly in near real-time operational settings.

To address these gaps, this study proposes a multi-layer BI framework that integrates data ingestion, transformation, KPI logic, semantic aggregation, and role-based visualization into a unified architecture for near real-time KPI monitoring [5, 12, 13]. The framework is implemented and validated through an industrial case study involving a multi-divisional building materials distribution firm. This context is characterized by high data variability, fragmented reporting practices, and intensive coordination demands.

The objectives of this study are fourfold:

  1. To design a multi-layer BI architecture that formalizes data processing, KPI logic, and role-based visualization within a coherent analytical pipeline.
  2. To develop standardized KPI computation rules and semantic aggregation models tailored to multi-divisional distribution environments.
  3. To implement the proposed framework in a real-world industrial context using an iterative Design Science Research approach.
  4. To evaluate the framework using a usability-driven assessment that combines heuristic evaluation and scenario-based performance testing across multiple user roles.

This study makes three principal contributions. Theoretically, it conceptualizes KPI logic as a first-class architectural layer that extends BI architecture beyond traditional data warehousing and visualization-centric models. Methodologically, it demonstrates how HCI techniques can be systematically integrated into BI evaluation to assess the effectiveness of real-world decision support [14, 15]. In practice, the study provides a replicable and cost-effective blueprint for organizations seeking to implement near-real-time KPI monitoring systems that ensure data consistency, operational visibility, and user adoption in distribution-intensive environments.

The remainder of this paper is organized as follows. Section 2 reviews related literature on BI architectures, KPI monitoring, real-time analytics, and usability evaluation. Section 3 presents the proposed multi-layer BI framework. Section 4 describes the research methodology. Section 5 reports the implementation and evaluation results. Section 6 discusses the theoretical and practical implications, and Section 7 concludes by outlining the limitations and directions for future research.

2. Literature Review

2.1 Business Intelligence and KPI monitoring

BI has evolved from static reporting tools into an integrated set of technologies and processes that transform heterogeneous raw data into actionable insights to support strategic and operational decision making [16, 17]. Contemporary BI platforms typically encompass data warehousing, online analytical processing, dashboard visualization, and analytical services that enable organizations to monitor performance through quantifiable indicators, commonly formalized as KPIs [13, 18]. KPIs function as a mechanism for translating organizational objectives into measurable targets, thereby supporting continuous performance monitoring, benchmarking, and alignment across organizational units [18, 19].

In distribution and logistics-intensive sectors, BI-driven KPI monitoring plays a particularly critical role due to high demand variability, multi-echelon inventory structures, and geographically dispersed operations [20]. Prior studies indicate that BI-supported KPI systems can reduce reporting latency, improve visibility into operational bottlenecks, and enhance responsiveness to disruptions across supply and distribution networks [21]. Nevertheless, many organizations continue to rely on spreadsheet-based reporting practices, which remain susceptible to data inconsistencies, manual entry errors, version control conflicts, and limited support for near real-time analytics [22-24]. These limitations constrain managers’ ability to identify anomalies early and to coordinate corrective actions across multiple divisions.

Although the literature broadly acknowledges the value of BI in enabling KPI-driven management, most existing studies conceptualize KPI dashboards primarily as end products rather than as components of an integrated analytical architecture. KPI computation logic is often embedded directly within extract, transform, and load routines or dashboard configurations, thereby reducing transparency, complicating maintenance, and limiting cross-divisional reuse. As a result, BI implementations frequently emphasize visualization outputs while underspecifying the architectural mechanisms that govern KPI definition, calculation consistency, and rule enforcement across organizational contexts [21, 25, 26].

This architectural limitation is particularly problematic in multi-divisional distribution environments, where performance indicators differ across sales, inventory, logistics, and human resource functions but must still adhere to a shared performance logic. Without explicit architectural support for standardized KPI computation and governance, inconsistencies in metric interpretation can emerge, undermining organizational alignment and decision reliability. Consequently, there remains a need to reconceptualize BI systems not merely as reporting tools, but as structured architectures that explicitly integrate data sources, transformation logic, KPI computation rules, and visualization layers into a coherent decision support ecosystem.

2.2 Business Intelligence frameworks and multi-layer architectures

To manage increasing data volume and analytical complexity, both researchers and practitioners have proposed various multi-layer BI architectures. These architectures are commonly structured into distinct layers that separate data acquisition, integration, semantic processing, and presentation functions [27]. Traditional data warehouse architectures, for example, distinguish between the data source layer where operational data are generated, the data integration or extract, transform, load layer that handles data preprocessing, the semantic or warehouse layer that stores cleansed and integrated data, and the presentation layer that delivers analytical results to end users through reporting and visualization tools [28, 29]. Several studies have extended these foundational architectures by introducing additional layers or components, such as metadata management, business rules, or analytical services, in order to support more advanced decision-making capabilities [30].

In performance management contexts, BI frameworks are often augmented with strategy maps, balanced scorecards, or KPI catalogs that aim to align operational indicators with higher-level organizational objectives [31, 32]. While these extensions enhance strategic alignment, many proposed frameworks remain conceptual and do not explicitly formalize the KPI logic layer where business rules, threshold definitions, and anomaly detection mechanisms are executed. In many existing BI implementations, KPI computation rules are embedded within ad hoc SQL scripts, extract, transform, load procedures, or dashboard configuration logic. This tightly coupled approach reduces transparency and limits reusability, as changes to KPI definitions often require modifications across multiple system components. Furthermore, such implementations hinder auditability and make it difficult to enforce consistent interpretation of KPIs across organizational units. As a result, KPI logic is frequently treated as an implementation detail rather than as a core architectural concern.

These limitations become more pronounced in multi-divisional distribution firms, where performance indicators vary significantly across sales, inventory management, logistics operations, and human resources functions [33, 34]. In these environments, each division may require different aggregation levels, alert thresholds, and visualization formats while still contributing to a unified organizational performance narrative. However, most existing BI architectures either adopt sector-neutral designs or focus on domain-specific applications in areas such as healthcare, manufacturing, or finance [35, 36]. Consequently, there is limited guidance on how to architect BI systems that explicitly accommodate the coordination and governance challenges inherent in distribution-oriented organizations. This gap highlights the need for a multi-layer BI framework that explicitly incorporates a dedicated KPI logic and rules engine, semantic aggregation mechanisms tailored to multiple organizational roles, and near real-time visualization capabilities. Such an architecture is necessary to bridge the gap between high-level BI concepts and their practical implementation in operationally intensive distribution contexts.

2.3 Real-time KPI monitoring and decision support systems

Real-time or near-real-time BI has gained increasing attention as organizations seek to shorten the latency between data generation, analysis, and decision-making [37]. In contrast to traditional batch reporting, real-time BI enables continuous monitoring of KPIs, immediate identification of deviations from expected performance, and timely intervention in dynamic environments. In distribution networks, real-time visibility into sales, stock levels, delivery performance, and workforce activity is critical for mitigating stockouts, capacity bottlenecks, and service-level violations [25, 38-40].

Decision Support Systems (DSS) historically focused on model-driven or data-driven approaches to aid managerial decision-making [30, 41]. With the advent of BI and analytics platforms, contemporary DSS increasingly leverage dashboards and KPI-driven interfaces as primary decision-support channels [11]. Recent studies highlight that situational awareness and early anomaly detection are key benefits of real-time KPI dashboards, particularly in domains such as power systems, supply chain operations, and emergency management [42, 43]. However, many reported systems rely on static thresholds, lack explicit integration with multi-layer BI architectures, or provide limited support for role-based decision support, where different user groups require different views of the same underlying data [44, 45].

Furthermore, although some works have introduced rule-based or machine-learning-based mechanisms for anomaly detection within KPI streams, these methods are often applied in isolated analytic pipelines rather than embedded within a holistic BI framework that spans from data capture to decision support [13, 46]. As a result, there remains a need for integrated approaches that combine real-time KPI monitoring, rule-based anomaly detection, and role-specific decision support within a unified architectural model, particularly in complex distribution operations.

2.4 Usability and human–computer interaction in Business Intelligence systems

Although the technical capabilities of BI systems have advanced significantly, their practical effectiveness remains highly dependent on usability and user experience. BI systems typically serve a diverse group of users, including executives, middle managers, analysts, and operational staff, each with different levels of technical expertise and distinct information needs. Poorly designed dashboards can lead to misinterpretation of performance indicators, underutilization of analytical features, and resistance to organizational adoption, even when the underlying data infrastructure is robust [47].

HCI research has long emphasized usability as a critical determinant of system effectiveness. Among the most widely adopted evaluation frameworks are Nielsen’s Ten Usability Heuristics, which provide structured criteria for assessing interface quality during early and iterative design stages [48, 49]. These heuristics include principles such as system status visibility, consistency, error prevention, recognition rather than recall, and support for user control. When applied systematically, heuristic evaluation enables the identification of usability issues that may impede efficient task execution and comprehension.

Several studies have applied heuristic evaluation methods and standardized usability instruments, such as the System Usability Scale, Questionnaire for User Interaction Satisfaction, and User Experience Questionnaire, to assess dashboards and analytical applications [50]. Findings from this body of work consistently indicate that usability problems often stem from unclear feedback, excessive information density, inconsistent navigation structures, and insufficient user guidance. However, compared with domains such as web applications and general information systems, the application of usability-driven evaluation in BI research remains limited.

Moreover, many BI-related studies focus on technical performance (e.g., query latency, scalability, data accuracy) while treating usability evaluation as a secondary or optional activity. Only a limited number of works explicitly integrate usability evaluation into the BI development lifecycle, for example, by using heuristic evaluation to refine dashboard design iteratively or by combining scenario-based tasks with role-specific usability assessment [47, 51]. This gap is particularly pronounced in industrial case studies, where BI deployments often evolve through informal feedback and localized adjustments rather than through systematic, theory-informed usability evaluation.

Although the studies summarized in Table 1 provide foundational models for BI implementation, a critical analysis reveals three persistent architectural limitations that constrain their applicability in distribution operations.

Table 1. Summary of prior Business Intelligence (BI) framework studies and identified research gaps

| Ref. | Context / Domain | Architecture Focus | KPI Logic Explicit? (Y/N) | Usability Eval? (Y/N) | Main Limitation / Gap Identified |
|---|---|---|---|---|---|
| [54] | Data Mart, Academic Productivity | Ad-hoc methodology, ETL+V, PDI/QlikView | Y (Template/SQL) | N/A | Less formal methodology (not repeatable) |
| [55] | BI Cost Accounting (Port Sector) | EIS LC method, ETL (PDI), Power BI | Y (Profit/Loss, Expenses) | N/A | Manual reporting (Excel) is not real-time |
| [56] | Improving BI Dashboard Usability | Kimball + Design Thinking (DT) & UX Laws | Y (COVID-19 Metrics) | Y (SUS, Time on Task) | Lack of HCI research in BI; DT process is time-consuming |
| [57, 58] | Data Vault Modeling Automation (AI/LLM) | Data Vault 2.0 powered by AI (LLM) | N/A | Y (Validity Coefficient) | LLM stability affects the repeatability of results |
| [59] | BIM + BI + HoloLens 2 (Facilities Management) | Power BI Dashboard + AR | Y (Asset Cycle Cost) | Y (PSSUQ, Expert) | Traditional tools have steep learning curves |
| [60] | KPI Implementation (Automotive) | KPQ-based KPI methodology, Power BI | Y (22 new KPIs) | Y | The company avoids technically complex methodologies |
| [61] | Distributed smart OLAP (e-government) | Hybrid Model (OLAP, Blockchain, NN) | N/A | N/A | Traditional OLAP is ineffective for Big Data/security |
| [62-65] | DW Music Sales Analysis | Kimball 9-Step, Star Schema, ETL (PDI) | N/A | N/A | Slower manual SQL reporting |
| [66, 67] | DSS for Retail Systems | Star Schema, Pentaho BI | N/A | N/A | Reliance on OLTP hampers strategic analysis |
| [68-70] | DW Contractor System (Budget) | Star Schema, ETL (PDI 6.1), CDE Dashboard | Y (Project Budget) | N/A | Manual Excel methods are time-consuming |
| [71, 72] | Efficient DW Construction (Big Data) | MySQL + Hadoop-based DW | N/A | Y (Data loading test) | Low efficiency of traditional DW for Big Data |
| [73-75] | DW Electromagnetic Environment (Big Data) | Relational & Non-Relational DB Integration | N/A | N/A | Traditional DBs struggle with heterogeneous data and high load |
| [70, 76] | Cloud ETL Optimization (Big Data) | Hybrid Optimization (GWO/TS) for dimension reduction | N/A | N/A | Redundancy and high data dimensionality |

First, regarding KPI Logic, existing frameworks such as [52, 53] typically embed KPI computation rules directly within SQL scripts or visualization widgets. This tightly coupled approach limits auditability and reusability, as modifying KPI definitions often requires changes across multiple system components. In contrast, the proposed framework explicitly decouples KPI computation into a dedicated, reusable KPI logic layer, enabling consistent rule management across divisions.

Second, regarding Real-Time Capability, prior studies predominantly rely on batch-oriented ETL processes that are suitable for periodic reporting but are insufficient for the continuous monitoring demands of logistics and distribution operations. These architectures often reduce reporting delays without explicitly addressing end-to-end latency, particularly in environments still dependent on spreadsheet-based data aggregation.

Third, regarding Domain Adaptability, many reviewed architectures focus on generic SME contexts or on sector-specific applications, such as facilities management. As a result, they do not explicitly accommodate the multi-divisional coordination challenges inherent in distribution enterprises, where Sales, Logistics, Inventory, and HR KPIs must be synchronized under a unified performance logic. This study addresses this gap by introducing a role-based semantic aggregation layer tailored to the operational structure of distribution firms.

2.5 Synthesis of research gaps

Based on the reviewed literature, several interrelated gaps can be identified:

  1. Architectural Gap

Existing BI frameworks typically describe generic multi-layer architectures but rarely model a distinct KPI logic layer that encodes business rules, alert thresholds, and anomaly-detection mechanisms in a reusable, maintainable way. The limited attention to KPI-centric architectures is particularly evident in multi-divisional distribution settings.

  2. Domain-Specific Gap (Distribution Sector)

Few studies provide an integrated BI framework specifically designed for building materials distributors or similar multi-division distribution firms, in which KPI structures differ across sales, inventory, logistics, and HR divisions yet must still be governed by a shared performance logic.

  3. Real-Time Decision Support Gap

Although real-time KPI dashboards are increasingly discussed, many implementations focus on visualization and do not situate KPI monitoring within a broader decision-support architecture that explicitly connects data sources, ETL processes, rule-based anomaly detection, and role-based visualization.

  4. Usability Evaluation Gap

While usability is widely recognized as a critical factor in BI adoption, systematic usability-driven evaluation, particularly through established Human Computer Interaction methods such as Nielsen’s heuristics, combined with scenario-based task evaluation, remains underrepresented in the BI literature. Few studies position usability evaluation as a core component of BI framework validation.

This study responds to these gaps by:

  1. Proposing a multi-layer BI framework that explicitly incorporates a KPI logic and rules engine, semantic aggregation, and role-specific visualization into a unified architectural model for real-time KPI monitoring.
  2. Applying and reporting a usability-driven evaluation that uses heuristic assessment and scenario-based testing with multiple organizational roles in an industrial distribution case study.

With this foundation, the next section presents the proposed multi-layer BI framework and describes how it operationalizes these concepts into an implementable architectural model for real-time KPI monitoring and decision support.

3. Proposed Multi-Layer Business Intelligence Framework

To overcome the limitations of fragmented data silos and manual reporting described in the literature review, this study proposes a multi-layer BI framework. This architecture is specifically designed to support real-time KPI monitoring in multi-divisional distribution firms by leveraging a low-code development stack. Unlike monolithic BI approaches that treat data processing and visualization as a single opaque block, this framework decouples the system into five distinct, interoperable layers: Data Ingestion, ETL & Transformation, KPI Logic & Rules Engine, Semantic Aggregation, and Role-Based Visualization. This layered abstraction ensures scalability, auditability, and the ability to accommodate diverse operational requirements. The comprehensive architectural blueprint is presented in Figure 1, illustrating the vertical flow from raw data acquisition to decision support.


Figure 1. Multi-layer Business Intelligence (BI) framework for real-time KPI monitoring

3.1 Layer 1: Data ingestion layer

The framework's foundation is the Data Ingestion Layer, which addresses the challenge of operational heterogeneity. In a typical SME distribution context, data is generated asynchronously across various functional units. This layer functions as a centralized "sink" that captures raw data streams without altering their intrinsic value.

  1. Sales Data

Transactional logs (e.g., invoices, order values) extracted from point-of-sale spreadsheets.

  2. Inventory Data

Stock movement snapshots from SQL-based warehouse management systems.

  3. Logistics Data

Delivery timesheets and fleet assignment logs.

  4. HR Data

Attendance records and workforce activity logs.

The primary objective of this layer is to establish a reliable connection pipeline—using API connectors or scheduled batch uploads—to ensure that the subsequent analytical layers are fed with continuous, up-to-date information.
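As an illustration of this design, the sketch below stages one raw source into a "sink" record with load metadata, without altering the payload. The source name, field layout, and staging structure are illustrative assumptions, not details taken from the case study system.

```python
# Minimal sketch of the Data Ingestion Layer: capture a raw stream as-is into
# a staging record with load metadata. Field names are hypothetical examples.
import csv
import io
from datetime import datetime, timezone

def ingest_source(name: str, stream) -> dict:
    """Capture one raw data stream into a staging record, unaltered."""
    rows = list(csv.DictReader(stream))
    return {
        "source": name,
        "loaded_at": datetime.now(timezone.utc).isoformat(),
        "row_count": len(rows),
        "rows": rows,  # raw payload preserved for the downstream ETL layer
    }

# Simulated point-of-sale extract (in practice, an API pull or batch upload).
sales_csv = io.StringIO("order_id,qty,price\nSO-1,10,2500\nSO-2,4,1200\n")
staged = ingest_source("sales", sales_csv)
print(staged["row_count"])  # 2
```

Because the payload is stored verbatim, any later change to transformation rules can be replayed against the original staged records, which is what makes the downstream layers auditable.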

3.2 Layer 2: ETL and transformation layer

Raw data from SMEs is inherently unstructured and prone to errors. The ETL (Extract, Transform, Load) Layer serves as the system's data quality firewall. Instead of merely moving data, this layer applies rigorous algorithmic transformations to ensure integrity before any metric is calculated. The transformation logic includes sanitization to remove duplicates and null values, normalization to standardize date and currency formats (ISO 8601), and enrichment to augment transaction data with master attributes, such as product categories and regional hierarchies. Furthermore, derived metrics—such as lead-time calculations—are computed at this stage to serve as inputs for high-level KPIs. A summary of the transformation rules applied to ensure semantic consistency is provided in Table 2.

Table 2. Summary of ETL transformation rules

| ETL Process | Description | Rules / Operations Applied | Output Data Fields |
|---|---|---|---|
| 1. Data Cleaning | Removes inconsistencies, duplicates, and invalid values from raw inputs | Remove duplicate transaction IDs; handle missing/null values; validate numeric fields (quantity, price, delay hours); correct date formats (DD/MM → YYYY-MM-DD) | Cleaned raw tables (sales_clean, inv_clean, logistic_clean) |
| 2. Standardization | Ensures uniform formats across divisions and data sources | Standardize product codes; normalize customer type labels; convert currency if applicable; normalize time units (minutes → hours) | Standardized fact tables (sales_std, inv_std, delivery_std) |
| 3. Data Enrichment | Joins reference/master data to add business attributes | Join item master (category, product line); join branch/division hierarchy; add working-day calendar; add SLA thresholds | Enriched tables (sales_enh, inv_enh, logistic_enh) |
| 4. Derived Metrics Calculation | Computes pre-KPI metrics required for the KPI engine | Sales value = qty × price; stock accuracy = (valid stock / system stock) × 100; delay category = IF lead time > SLA THEN "Late"; attendance compliance = present days / working days | Metric-ready fact tables (sales_m, inv_m, logistic_m, hr_m) |
| 5. Business Rule Application | Applies pre-defined logic before KPI computation | Reclassify customer types; flag negative stock events; categorize delay severity; identify invalid transactions | Rule-applied tables (sales_r, inv_r, logistic_r) |
| 6. Data Aggregation | Creates aggregated daily/weekly/monthly summary tables | SUM sales by product/branch/region; AVG lead time by courier/route; COUNT anomalies per division; daily/weekly/monthly roll-ups | Aggregated summaries (sales_aggr, inv_aggr, logistics_aggr) |
| 7. Load to Semantic Store | Stores transformed data into the semantic BI layer | Load to KPI semantic tables; store role-based classification; upload refresh metadata | Semantic-ready datasets (kpi_sales, kpi_inv, kpi_logistics, kpi_hr) |
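To make the cleaning and derivation rules concrete, the sketch below applies a subset of the Table 2 logic (duplicate removal, null handling, ISO 8601 date normalization, the qty × price derived metric, and the SLA delay rule) to a small sales stream. Field names and the SLA value are illustrative assumptions.

```python
# Hedged sketch of a few Table 2 rules on one sales stream. The record layout
# and the 24-hour SLA are hypothetical, not from the case-study system.
from datetime import datetime

SLA_HOURS = 24  # illustrative SLA threshold for the delay-category rule

def transform(raw_rows):
    seen, out = set(), []
    for r in raw_rows:
        # Cleaning: drop duplicate transaction IDs and rows with null quantity.
        if r["txn_id"] in seen or r["qty"] is None:
            continue
        seen.add(r["txn_id"])
        out.append({
            "txn_id": r["txn_id"],
            # Normalization: DD/MM/YYYY -> YYYY-MM-DD (ISO 8601).
            "date": datetime.strptime(r["date"], "%d/%m/%Y").strftime("%Y-%m-%d"),
            # Derived metric: sales value = qty x price.
            "sales_value": r["qty"] * r["price"],
            # Business rule: IF lead time > SLA THEN "Late".
            "delay_category": "Late" if r["lead_time_h"] > SLA_HOURS else "On-Time",
        })
    return out

rows = [
    {"txn_id": "T1", "date": "05/01/2026", "qty": 3, "price": 100.0, "lead_time_h": 30},
    {"txn_id": "T1", "date": "05/01/2026", "qty": 3, "price": 100.0, "lead_time_h": 30},  # duplicate
    {"txn_id": "T2", "date": "06/01/2026", "qty": None, "price": 50.0, "lead_time_h": 5},  # null qty
    {"txn_id": "T3", "date": "07/01/2026", "qty": 2, "price": 80.0, "lead_time_h": 5},
]
clean = transform(rows)
print([(r["txn_id"], r["sales_value"], r["delay_category"]) for r in clean])
# [('T1', 300.0, 'Late'), ('T3', 160.0, 'On-Time')]
```

Running the rules in this fixed order (clean, normalize, derive, classify) is what allows the KPI engine in the next layer to assume metric-ready inputs.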

3.3 Layer 3: KPI logic and rules engine

This layer constitutes the theoretical and operational core of the proposed multi-layer BI framework. A critical limitation observed in many existing BI implementations is the direct embedding of KPI formulas within visualization components or ad hoc SQL queries, which constrains reusability, auditability, and cross-divisional consistency. To address this issue, the proposed framework introduces a dedicated KPI Logic and Rules Engine that explicitly formalizes performance computation and decouples KPI processing from the presentation layer.

Operationally, the KPI Logic and Rules Engine functions as an intermediary execution layer between the ETL and transformation layer and the semantic aggregation layer. Once the ETL pipeline has produced standardized, metric-ready datasets, the KPI engine is activated via a scheduled refresh cycle. During each execution cycle, the engine performs two primary functions, namely standardized KPI computation and rule-based anomaly detection.

First, beyond numerical calculation, the KPI Logic and Rules Engine applies deterministic evaluation rules to assess performance states. Each KPI is associated with predefined threshold values derived from operational targets and historical performance benchmarks. Computed KPI values are systematically evaluated against these thresholds to classify performance conditions, such as normal, warning, or anomalous states. For example, inventory deviations exceeding predefined tolerance levels or delivery delays surpassing acceptable time windows are automatically flagged as anomalies. The scoring logic and threshold classifications applied in this study are summarized in Table 3.

Table 3. KPI scoring and threshold classification rules

| KPI | Green (Good) | Yellow (Moderate) | Red (Poor) |
|---|---|---|---|
| Sales Achievement | ≥ 95% | 80–94% | < 80% |
| Stock Accuracy | ≥ 98% | 95–97% | < 95% |
| On-Time Delivery (OTD) | ≥ 90% | 75–89% | < 75% |
| Complaint Rate | < 2% | 2–5% | > 5% |
| Attendance Compliance | ≥ 95% | 85–94% | < 85% |
| Dead Stock Ratio | < 5% | 5–15% | > 15% |
| Delay Duration | < 2 hours | 2–6 hours | > 6 hours |
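These threshold rules can be encoded as a single data-driven classifier rather than per-dashboard conditional formatting. The sketch below is illustrative only: the KPI keys and the direction flags (higher-is-better versus lower-is-better) are inferred from Table 3, not taken from the implemented engine.

```python
# Data-driven sketch of the Table 3 threshold rules; KPI keys and
# direction flags are inferred, not the implemented engine's names.

THRESHOLDS = {
    # kpi: (green_bound, yellow_bound, higher_is_better)
    "sales_achievement": (95, 80, True),   # % of target
    "stock_accuracy":    (98, 95, True),   # %
    "on_time_delivery":  (90, 75, True),   # %
    "attendance":        (95, 85, True),   # %
    "complaint_rate":    (2, 5, False),    # %
    "dead_stock_ratio":  (5, 15, False),   # %
    "delay_duration":    (2, 6, False),    # hours
}

def classify(kpi: str, value: float) -> str:
    green, yellow, higher_is_better = THRESHOLDS[kpi]
    if higher_is_better:
        if value >= green:
            return "Green"
        return "Yellow" if value >= yellow else "Red"
    if value < green:
        return "Green"
    return "Yellow" if value <= yellow else "Red"
```

Keeping the bounds in a lookup table means managers can retune thresholds without touching the classification code, which supports the auditability goal stated above.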

Second, the engine executes standardized mathematical definitions to ensure that each KPI is computed consistently across organizational levels and functional units. By centralizing KPI formulas within this layer, the framework ensures uniform interpretation of performance indicators across user roles and visualization contexts. This architectural separation improves traceability, simplifies rule maintenance, and supports governance of performance metrics in complex organizational settings. The formal KPI definitions and computation rules implemented in this study are detailed in Table 4.

Table 4. KPI definitions and mathematical formulas

| KPI Name | Description / Purpose | Formula | Unit |
|---|---|---|---|
| Sales Achievement | Measures the percentage of target sales achieved | $\frac{\text{Actual Sales}}{\text{Target Sales}} \times 100$ | % |
| Average Order Value (AOV) | Average sales value per customer order | $\frac{\text{Total Sales Value}}{\text{Number of Orders}}$ | IDR |
| Stock Accuracy | Compares physical stock with system stock | $\frac{\text{Valid Stock}}{\text{System Stock}} \times 100$ | % |
| Dead Stock Ratio | Identifies items with no movement for a period | $\frac{\text{Dead Stock Items}}{\text{Total Items}} \times 100$ | % |
| On-Time Delivery (OTD) | Measures delivery punctuality | $\frac{\text{On-Time Deliveries}}{\text{Total Deliveries}} \times 100$ | % |
| Delay Duration | Average delivery delay vs. SLA | $\text{Actual Lead Time} - \text{SLA Lead Time}$ | Hours |
| Complaint Rate | Frequency of customer complaints | $\frac{\text{Total Complaints}}{\text{Number of Orders}} \times 100$ | % |
| Attendance Compliance | Percentage of days employees are present | $\frac{\text{Present Days}}{\text{Working Days}} \times 100$ | % |

When anomalies are detected, explicit status flags are generated and propagated to the Semantic Aggregation Layer, where they are contextualized according to organizational roles and reporting hierarchies. This separation of computation, rule evaluation, and semantic interpretation enhances transparency, facilitates auditing of KPI logic, and enables scalable adaptation of business rules without modifying dashboard configurations. The execution flow of the KPI Logic and Rules Engine and its interaction with adjacent layers are illustrated in Figure 2.

Figure 2. Data flow and anomaly detection logic sequence

3.4 Layer 4: Semantic aggregation layer

While Layer 3 computes values, Layer 4 constructs meaning. The Semantic Aggregation Layer contextualizes the computed KPIs based on the user's organizational perspective and addresses the information-overload problem by filtering and grouping data into logical dimensions.


Figure 3. Semantic aggregation model

As illustrated in the Semantic Aggregation Model (Figure 3), this layer processes data through three-dimensional filters:

  1. Functional Dimension: Grouping metrics by domain (e.g., Sales, Logistics, HR).
  2. Temporal Dimension: Aggregating data into meaningful time horizons (Daily Snapshots for operations vs. Monthly Trends for strategy).
  3. Role-Based Dimension: Determining data granularity (e.g., restricting sensitive margin data to Directors while showing volume data to Supervisors).
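The three-dimensional filtering described above can be sketched as a simple scope lookup per role. The role-to-scope mappings below are illustrative assumptions for exposition; they are not the firm's actual access policy.

```python
# Minimal sketch of the three-dimensional semantic filter; the
# role-to-scope mappings are illustrative assumptions.

ROLE_SCOPE = {
    "director":   {"functions": {"sales", "logistics", "hr", "finance"},
                   "horizon": "monthly", "sensitive": True},
    "supervisor": {"functions": {"sales", "logistics"},
                   "horizon": "daily",   "sensitive": False},
    "staff":      {"functions": {"logistics"},
                   "horizon": "daily",   "sensitive": False},
}

def semantic_filter(kpis, role):
    # Keep only KPIs matching the role's functional, temporal,
    # and sensitivity dimensions.
    scope = ROLE_SCOPE[role]
    return [k for k in kpis
            if k["function"] in scope["functions"]
            and k["horizon"] == scope["horizon"]
            and (scope["sensitive"] or not k.get("sensitive", False))]
```

In this sketch, a sensitive margin KPI tagged `"sensitive": True` reaches the director view but is filtered out of supervisor and staff views, mirroring the role-based dimension described above.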

3.5 Layer 5: Role-based visualization layer

The final layer is the user-facing interface, designed to maximize Cognitive Fit—the correspondence between the data presentation and the user's mental model. Rather than a "one-size-fits-all" dashboard, this layer generates specialized views as depicted in the User Interface Architecture (Figure 4).

  1. Strategic View (Director): Focuses on high-level aggregate trends, financial health, and cross-divisional comparisons.
  2. Tactical View (Manager): Focuses on branch performance, resource allocation, and exception management.
  3. Operational View (Staff): Focuses on task lists, immediate alerts, and daily compliance metrics.

By structuring the visualization based on the Semantic Layer's output, the system ensures that each user role receives actionable intelligence relevant to their specific scope of authority, thereby enhancing decision speed and accuracy.


Figure 4. User interface architecture and component interactions

4. Research Methodology

This study adopts a mixed-methods design–evaluation approach that integrates systems engineering principles for framework development with HCI-oriented evaluation techniques to assess dashboard usability. The methodological sequence consists of four major phases: research design, case study context and data collection, system development through an iterative BI lifecycle, and usability-driven evaluation. This structured approach ensures both technical rigor and practical relevance, allowing the proposed BI solution to be examined in a realistic organizational environment.

4.1 Research design

This study employs a single-case embedded design, an approach widely recognized in information systems research for investigating complex organizational phenomena where context and multi-role interactions are critical. The research design integrates two complementary methodological strands: a Design Science Research (DSR) approach for system development and a Human–Computer Interaction (HCI)-oriented evaluation. As illustrated in Figure 5, the research workflow consists of three primary phases: diagnosis and data collection, iterative system development, and usability-driven evaluation. This methodological integration ensures that the resulting BI framework is not only architecturally robust but also cognitively aligned with the practical needs of decision-makers in a resource-constrained environment.


Figure 5. Research workflow / methodological framework

4.2 Case study context

The empirical setting for this study is a multi-divisional building materials distribution firm operating across sales, logistics, warehouse management, human resources (HR), and finance. The organization generates substantial volumes of data from high-frequency sales transactions, multi-location warehouse operations, and daily delivery logistics. This context is ideal for evaluating BI integration because the firm historically relied on decentralized spreadsheet-based reporting and manual data aggregation. These legacy practices resulted in significant data latency, inconsistent KPI interpretations, and delayed decision-making, common challenges in Small and Medium Enterprises (SMEs) in developing economies.

4.3 Data collection procedures

Data collection used a triangulation approach with three main sources to ensure accuracy. First, operational data were pulled from raw datasets, including daily sales reports, inventory movement logs, delivery schedules, and attendance records. These datasets provided the basis for building the ETL pipeline and calculating KPI formulas. Second, semi-structured interviews were held with key stakeholders, such as sales supervisors, warehouse managers, logistics coordinators, and division heads. These interviews focused on defining core KPIs, identifying operational challenges, and setting dashboard requirements. Third, ongoing user feedback was collected during the iterative prototyping stage to improve dashboard layouts, color coding, and drill-down interaction paths.

4.4 System development process

The BI system was developed through an iterative lifecycle that integrated ETL modeling, KPI rule formulation, and role-based visualization design. The process began with Requirements Analysis, in which functional requirements were derived from process mapping and stakeholder interviews. Non-functional requirements, such as real-time accuracy and low cognitive load, were also established.

Subsequently, the ETL Pipeline Construction phase focused on harmonizing heterogeneous datasets. This process involved cleaning key fields, standardizing formats, and applying business logic to generate consolidated datasets for the semantic layer. The specific transformation rules applied to ensure data integrity are detailed in Table 2 (Section 3).

Following data preparation, the KPI Rule Engineering phase focused on encoding the performance logic. Crucially, the determination of specific alert thresholds and performance benchmarks, as summarized in Table 3, was not arbitrary. These parameters were established through a triangulation method combining: (1) the identification of logistics and distribution performance indicators with reference to established frameworks like the SCOR model, (2) the analysis of 12 months of historical operational data to determine typical performance ranges, and (3) expert consultations with managerial stakeholders to refine KPI definitions and threshold levels.

Based on this foundation, each performance indicator was computed using the standardized mathematical definitions and formulas presented in Table 4. To automate the identification of performance outliers, the strictly defined threshold rules (Green/Yellow/Red) were mapped to each KPI as detailed in the same table. These rules were subsequently encoded into transformation scripts and rigorously validated through cross-divisional checks to ensure calculation accuracy before deployment to the visualization layer.

Figure 6. Technical technology stack architecture

The final development stage was Dashboard Development, which focused on creating specialized views for three distinct user roles: Directors (strategic view), Supervisors (tactical view), and Operational Staff (execution view). The interface design adhered to principles of visual minimalism and consistency and was implemented in Google Looker Studio to ensure accessibility and cost-effectiveness. The user interface architecture and component interactions are depicted in Figure 4.

4.4.1 Technology stack and implementation details

To ensure replicability and minimize capital expenditure (CAPEX) for the target SME, the system utilizes a cloud-native, low-code technology stack. The specific technological components selected for each architectural layer are detailed below and illustrated in Figure 6.

  1. Data Storage Layer (Database)

Google Sheets was selected as the primary data warehouse due to its native integration with operational inputs (CSV/Excel) and zero licensing cost. For scalability and historical archiving, the system connects to Google BigQuery.

  2. ETL & Logic Layer

Google Apps Script functions as the middleware. It executes the ETL routines (cleaning and validation) and hosts the KPI Rules Engine scripts described in Section 3.3. This serverless environment enables automated, trigger-based execution (e.g., time-driven triggers at 15-minute intervals) without dedicated infrastructure.

  3. Visualization Layer

Google Looker Studio serves as the front-end interface, chosen for its drag-and-drop flexibility and ability to render real-time data from the Google ecosystem without connector latency.

This selection is justified by the resource-constrained nature of the case study environment, in which high-maintenance enterprise ERP systems are often financially infeasible.

4.5 Usability-driven evaluation

To validate the system's effectiveness, a two-phase usability assessment was conducted. The first phase was a Heuristic Evaluation performed by three trained evaluators using Nielsen’s Ten Usability Heuristics. Each identified issue was assigned a severity score (0–3) to prioritize interface refinements.

The second phase involved a scenario-based usability evaluation conducted with 15 participants selected through purposive sampling to represent the system's intended user strata. The participant pool consisted of two Directors evaluating the strategic dashboard, five Divisional Managers evaluating the tactical dashboard, and eight Operational Staff evaluating the operational dashboard.

Participants were asked to complete realistic, role-specific usage scenarios that reflected routine decision-making and monitoring tasks. The evaluation focused on task completion, clarity of information presentation, ease of navigation, and qualitative user feedback. Consistent with the formative nature of the study, the objective was to identify usability issues and assess practical suitability rather than to perform inferential statistical analysis.

4.6 Data analysis techniques

Data analysis employed both quantitative and qualitative methods. Quantitative analysis focused on descriptive statistics of task performance metrics and on aggregating heuristic severity scores. Qualitative analysis involved thematic coding of evaluator comments and user feedback. The triangulation of these results provides robust evidence regarding the system’s usability and its impact on decision-making efficiency.

4.7 Methodological rigor and validity

To ensure research rigor, this study addresses construct validity by using standardized KPI formulas and established heuristic guidelines. Internal validity is supported by the consistency of findings across different user roles, while reliability is maintained through replicable ETL processes and rule definitions. Although the single-case design limits broad generalization, the detailed description of the architectural implementation offers a transferrable blueprint for similar distribution enterprises.

5. Results

This section presents the implementation results of the proposed multi-layer BI framework and the empirical evaluation conducted through heuristic assessment and scenario-based performance testing. The results demonstrate how the framework enables real-time KPI monitoring, improves information consistency across divisions, and enhances usability for multiple organizational roles.

5.1 System implementation results

5.1.1 ETL integration and KPI data consolidation

The implementation of the ETL pipeline successfully consolidated heterogeneous datasets from sales, inventory, logistics, and HR divisions into a unified semantic repository. A primary outcome of this integration was the significant reduction in data latency. While the legacy manual compilation process required a 24-hour cycle to generate reports, the automated pipeline achieved a refresh interval of 15–30 minutes, enabling near real-time visibility.

In the context of this study, the term “real-time” is operationally defined as near-real-time data latency sufficient to support tactical and operational decision-making cycles in distribution logistics. This definition explicitly distinguishes the proposed framework from systems requiring sub-second latency, such as high-frequency trading platforms or industrial automation environments. Accordingly, the real-time capability discussed in this study refers to the availability of timely, continuously updated information rather than instantaneous event-level processing.

Furthermore, the transformation rules effectively standardized KPI data structures, facilitating cross-divisional comparisons that had previously been obstructed by inconsistent naming conventions. Data quality also improved markedly, with an estimated 78% reduction in missing, duplicate, or inconsistent fields following application of the validation logic. This consolidation ensures that all subsequent KPI computations and dashboard visualizations rely on a verified and consistent single source of truth.

To explicitly quantify the operational improvements achieved by the proposed framework, a comparative analysis was conducted against the legacy spreadsheet-based system. As summarized in Table 5, the transition to the Multi-Layer BI Framework resulted in significant gains in reporting speed and data integrity.

Table 5. Performance comparison: Legacy system vs. Multi-layer Business Intelligence (BI) framework

| Performance Metric | Legacy System (Spreadsheet-Based) | Proposed Multi-Layer BI Framework | Improvement / Impact |
|---|---|---|---|
| Reporting Latency | ~24 Hours (Daily manual compilation) | 15–30 Minutes (Near Real-Time Refresh) | 98% Reduction in information delay |
| Data Error Rate | High (Prone to manual entry & formula errors) | Low (Automated validation via ETL) | 78% Reduction in data inconsistencies |
| Anomaly Detection | Reactive (Detected only after report completion) | Proactive (Automated alerts upon threshold breach) | Immediate visibility of issues (e.g., stock deviation) |
| Data Consistency | Fragmented (Siloed versions across divisions) | Unified (Single Source of Truth) | Eliminated cross-divisional data disputes |

5.1.2 KPI logic and categorization outcomes

Within the KPI Logic Layer, the system successfully encoded the standardized formulas and thresholds defined in the proposed framework. This implementation activated 27 standardized KPIs across operational functions. The rule-based anomaly detection engine proved effective in identifying performance outliers that were often missed in manual reports. During the pilot testing phase, the system automatically flagged several critical anomalies, including a sudden 42% drop in daily sales for a specific division and an isolated stock variance of 9.4% in the warehouse. Additionally, the system identified a sharp increase in delivery delays exceeding the two-hour threshold during a specific shift. These automated alerts, subsequently confirmed by divisional managers, validate the practical utility of the deterministic rule-based engine for providing early warnings without the complexity of black-box predictive models.

5.1.3 Role-based dashboard implementation

The visualization layer was deployed through three distinct dashboard interfaces designed to match the cognitive requirements of Directors, Supervisors, and Operational Staff. The Strategic Dashboard, designed for upper management, provides a high-level overview of organizational health. As shown in Figure 7, this interface utilizes aggregated gauge charts and trend lines to visualize cross-divisional performance, allowing directors to instantly assess financial health and major KPI achievements against monthly targets.

In contrast, the Operational Dashboard focuses on task-level granularity for logistics and warehouse staff. As illustrated in Figure 8, this view prioritizes immediate actionability by displaying tabular lists of pending deliveries, stock exceptions, and daily compliance metrics, with clear color-coded alerts (Red, Yellow, Green). This distinct separation of views ensures that users are not overwhelmed by irrelevant data, thereby minimizing cognitive load and facilitating faster response times to operational issues. By aligning information density with execution-level responsibilities, the dashboard supports rapid situational awareness, reduces search effort during routine monitoring tasks, and enables frontline staff to initiate timely corrective actions without requiring additional analytical interpretation.


Figure 7. Implementation of the strategic dashboard


Figure 8. Operational dashboard

5.2 Heuristic evaluation results

The usability of the developed dashboards was assessed by three trained evaluators using Nielsen’s Ten Usability Heuristics. The evaluation aimed to identify interface design issues that could hinder user interaction. The aggregated severity scores for each heuristic are summarized in Table 6.

Table 6. Detailed heuristic evaluation results and recommended improvements

| No. | Heuristic | Severity Score* | Issue Description / Observation | Level | Recommended Improvements |
|---|---|---|---|---|---|
| 1 | Visibility of System Status | 0.7 | Some KPI updates lacked immediate visual confirmation. | Minor | Add micro-feedback (loading states, update indicators). |
| 2 | Match Between System and Real World | 0.3 | Terminology (e.g., “leadtime_class”) not intuitive for non-technical users. | Minor | Replace technical labels with business-friendly terms. |
| 3 | User Control and Freedom | 1.0 | Users needed multiple steps to exit drilldown screens. | Moderate | Provide a clear “Back” button and breadcrumb navigation. |
| 4 | Consistency and Standards | 0.7 | Inconsistent date formats across dashboard panels. | Minor | Standardize timestamps (ISO-based, or company standard). |
| 5 | Error Prevention | 0.5 | No warnings before applying filters that drastically change results. | Minor | Add confirmation dialog for destructive filter operations. |
| 6 | Recognition Rather Than Recall | 1.3 | Some KPI icons not self-explanatory; required hovering to understand. | Moderate | Use clearer icons; add short labels beneath icons. |
| 7 | Flexibility and Efficiency of Use | 1.0 | Experts wanted shortcuts; novices found layout overwhelming. | Moderate | Add quick-access shortcuts; simplify default view. |
| 8 | Aesthetic and Minimalist Design | 0.5 | KPI cards were dense; color usage slightly inconsistent. | Minor | Declutter panels; standardize color palette. |
| 9 | Help Users Recognize, Diagnose, and Recover from Errors | 1.5 | Error messages (e.g., data unavailable) lacked actionable guidance. | Moderate | Provide clear cause and next steps in error messages. |
| 10 | Help and Documentation | 1.2 | No embedded help or quick tips for new users. | Moderate | Add tooltip-based help or an onboarding walkthrough. |

The analysis reveals that the system exhibits a high degree of consistency and visual minimalism, as indicated by the low severity scores for "Match between system and the real world" (H2) and "Aesthetic and minimalist design" (H8). Evaluators praised the clean layout and the use of familiar business terminology. However, moderate issues were identified in "Recognition rather than recall" (H6) and "Error recovery support" (H9). Specifically, evaluators noted that certain drill-down paths required users to remember prior filter selections, and that the mechanism for undoing complex filter applications was not immediately intuitive. These findings, visualized in the severity ranking chart in Figure 9, provided actionable guidance for refining the navigation flow prior to the final deployment.


Figure 9. Ranked severity analysis of usability issues based on Nielsen’s ten heuristics

5.3 Scenario-based performance testing

To validate the system's effectiveness under realistic working conditions, scenario-based testing was conducted with participants representing directors, supervisors, and operational staff. Users performed five predefined tasks ranging from simple information retrieval to complex root-cause analysis. The performance metrics, including completion time and success rates, are detailed in Table 7.

Table 7. Scenario-based usability testing tasks and performance metrics

| Task ID | Task Description | Evaluation Objective | Metrics Collected | Results (Avg) | Key Issues / Observations |
|---|---|---|---|---|---|
| T1 | Identify the lowest-performing division based on KPIs | Assess the user’s ability to interpret multi-division KPI rankings | Completion time (s); success (Y/N); navigation path; mis-clicks | 14 s, 100% success | Users found the ranking list intuitive; no errors. |
| T2 | Detect an anomaly in stock accuracy for the last month | Evaluate anomaly detection comprehension | Completion time (s); errors; accuracy of anomaly identification | 32 s, 86% success | Some users were confused by the anomaly map’s color scale. |
| T3 | Find the top-performing salesperson in the current quarter | Assess the ability to drill down hierarchically | Completion time (s); drill-down steps; errors | 18 s, 100% success | Drill-down interaction was well understood. |
| T4 | Identify the root cause of delivery delays | Evaluate cross-KPI navigation and causal inspection | Completion time (s); success (Y/N); sequence of clicks; error count | 41 s, 79% success | Users struggled with cross-page transitions. |
| T5 | Generate a director-level performance summary | Assess synthesis skills across multiple KPIs | Completion time (s); error rate; ability to locate higher-level summaries | 59 s, 93% success | Longer navigation; summary panel location unclear. |

The results confirm the system's functional viability, with an average success rate exceeding 90% across all tasks. High-frequency tasks, such as identifying the lowest-performing division (T1) and finding top sales performers (T3), were completed in under 20 seconds, indicating high interface efficiency. However, tasks involving multi-step synthesis, such as identifying the root cause of delivery delays (T4), took longer (average 41 seconds) and had a noticeably lower success rate (79%). This aligns with the heuristic evaluation findings regarding drill-down complexity. As illustrated in Figure 10, while basic monitoring tasks are highly efficient, deep analytical tasks impose a higher cognitive load, suggesting a need for simplified navigation in future iterations. Notably, supervisors demonstrated the fastest task completion times, likely due to their familiarity with daily operational metrics.


Figure 10. Scenario-based usability testing results

5.4 Synthesis of evaluation

Qualitative feedback from participants corroborated the quantitative findings. Users reported high satisfaction with the clarity of the color-coded performance indicators and with the system's ability to refresh data in near-real-time. The role-based design was particularly appreciated for reducing information overload and enabling each user group to focus on relevant metrics. Although some users suggested adding an onboarding tutorial to support advanced filtering features, the overall consensus is that the Multi-Layer BI Framework successfully bridges the gap between technical data processing and user-centered decision support.

Furthermore, an attribution-oriented analysis of the observed performance improvements suggests that the gains are primarily attributable to architectural design choices rather than digitization alone. While automation contributed to a substantial reduction in data latency (from approximately 24 hours under the legacy workflow to roughly 15–30 minutes in the proposed system), improvements in task completion performance (Table 7) are more closely linked to the introduction of the Semantic Aggregation Layer (Layer 4) and the Role-Based Visualization Layer (Layer 5).

Without these layers, users would still be exposed to high information density despite having access to near-real-time data. By selectively filtering and contextualizing KPIs according to user roles—for example, abstracting strategic financial indicators from operational dashboards—the proposed architecture reduces cognitive search effort during task execution. This observation supports the argument that the multi-layer architectural design, rather than the software tool alone, is a key contributor to improved decision-support effectiveness.

The combination of automated anomaly detection and intuitive visualization has significantly improved the organization's operational visibility and responsiveness.

6. Discussion

This section interprets the study’s findings in relation to existing literature, managerial practice, and the theoretical foundations of BI. By integrating architectural design, KPI-centric logic, and usability-driven assessment, the proposed framework contributes to three key domains: BI architectural theory, real-time KPI analytics, and user-centered decision support within industrial environments.

6.1 Theoretical implications

6.1.1 Advancing BI architectural theory through a KPI-centric model

The findings demonstrate that a BI architecture explicitly incorporating a KPI Logic & Rules Engine Layer offers distinct theoretical value beyond traditional BI models. While prior studies often treat KPI calculations as technical details embedded within ETL routines or dashboard scripts, this study elevates KPI logic to a dedicated architectural layer. This separation formalizes KPI computation as a central analytical component, clarifies the translation of strategic objectives into operational metrics, and reduces the semantic ambiguity that frequently arises in multi-divisional contexts. This refinement aligns with systems engineering principles and enriches existing BI frameworks, which typically emphasize data warehousing or visualization but under-specify the transformation of raw data into decision-relevant intelligence. Consequently, this study proposes that KPI logic should be conceptualized as a first-class, reusable, and auditable component of modern BI architectures.

This explicit architectural treatment of KPI logic has received limited attention in prior BI literature.

6.1.2 Integrating usability evaluation into BI research

Traditional BI research has predominantly focused on technical performance, scalability, and data accuracy, often treating usability as a secondary consideration. This study advances the discourse by systematically incorporating heuristic evaluation and scenario-based testing into the BI development lifecycle. The results confirm that even technically robust dashboards may fail to deliver decision support if usability barriers persist. By demonstrating that HCI techniques can be effectively applied to validate BI systems, this research strengthens the methodological rigor of BI evaluation. It reinforces the theoretical proposition that usability is a determining factor of analytical effectiveness.

6.2 Practical implications

6.2.1 Enhanced organizational visibility and cross-division alignment

The real-time KPI intelligence delivered by the framework proved effective in addressing the case company's long-standing reporting challenges. By consolidating and standardizing data across sales, inventory, logistics, and HR divisions, the framework enabled a synchronized understanding of performance and faster anomaly detection. Managers reported significantly improved clarity in interpreting indicators, whereas directors benefited from uniform cross-divisional comparisons that had previously been hindered by inconsistent legacy reporting. These outcomes highlight the practical value of structured BI architectures for organizations operating with fragmented data ecosystems.

6.2.2 Role-based decision support for multi-level management

The successful deployment of role-specific dashboards illustrates the importance of tailoring information granularity to user needs. The system effectively differentiated views—providing strategic overviews for directors, tactical insights for supervisors, and task-level alerts for operational staff. This differentiation reduced cognitive overload and improved decision relevance, addressing a common failure point in BI projects: "one-size-fits-all" dashboards that lead to user disengagement. The findings suggest that role-based customization is essential for enhancing adoption and managerial efficiency in multi-functional enterprises.

6.2.3 The viability of rule-based approaches for developing economies

A critical practical implication of this study is the validation of deterministic rule-based analytics as a viable alternative to high-cost AI solutions for SMEs in developing economies. While machine learning offers predictive capabilities, it requires substantial historical data, computational power, and specialized expertise—resources that are often scarce in these regions. The proposed framework demonstrates that a well-structured rule-based engine can deliver immediate operational stability, auditability, and sufficient anomaly-detection capabilities at a fraction of the cost. This finding offers a strategic pathway for SMEs to achieve digital maturity without prematurely overinvesting in complex technologies.
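The auditability claim above rests on the fact that every alert in a deterministic engine can be traced back to an explicit rule and threshold. The sketch below illustrates that style of rule evaluation under assumed KPI names and thresholds; it is not the case firm's actual rule set.

```python
# Hedged sketch of deterministic, auditable rule evaluation of the kind
# the framework advocates. KPI names, thresholds, and messages are
# illustrative assumptions, not the firm's actual rules.

RULES = [
    # (kpi, predicate(value, baseline), alert message)
    ("daily_sales", lambda v, b: v < 0.8 * b,
     "sales dropped more than 20% below baseline"),
    ("stock_level", lambda v, b: v < b,
     "stock below reorder point"),
]

def evaluate(snapshot: dict, baselines: dict) -> list[str]:
    """Return an alert for every rule that fires. Each decision is
    reproducible from the rule table alone, which is what makes the
    approach auditable without historical training data."""
    alerts = []
    for kpi, pred, msg in RULES:
        if kpi in snapshot and kpi in baselines and pred(snapshot[kpi], baselines[kpi]):
            alerts.append(f"{kpi}: {msg}")
    return alerts

print(evaluate({"daily_sales": 700, "stock_level": 120},
               {"daily_sales": 1000, "stock_level": 100}))
# ['daily_sales: sales dropped more than 20% below baseline']
```

Unlike a learned model, such a table can be reviewed and amended by domain managers directly, which is precisely the low-cost governance property emphasized for resource-constrained SMEs.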

6.3 Comparison with prior studies

Compared with existing BI literature, which often emphasizes visualization techniques or back-end data integration in isolation, this study introduces a holistic, multi-layer architecture that integrates ETL, KPI logic, semantic models, and decision support into a unified system. Furthermore, unlike generic BI studies, this research provides a domain-specific adaptation for the distribution sector that addresses the unique complexities of multi-divisional logistics. Most notably, this study bridges the gap between systems engineering and HCI by providing empirical evidence on how usability evaluation can directly inform BI architectural refinement—an approach rarely detailed in prior industrial case studies.

6.4 Managerial insights

Three key lessons emerge for practitioners. First, standardization reduces organizational friction; harmonized data pipelines eliminate cross-division disputes over data validity and accelerate report generation. Second, real-time responsiveness is critical; the ability to detect sales drops or stock anomalies within minutes rather than days significantly strengthens operational resilience. Third, usability drives sustained adoption; the preference for the redesigned dashboards confirms that user experience quality is as critical as data accuracy. Managers are more likely to utilize analytics tools that minimize navigation effort and provide clear, actionable feedback.

6.5 Limitations

Despite its contributions, this study has limitations. First, the evaluation focused on a single organizational case, which may limit the generalizability of the findings to other industries. Second, the anomaly detection logic is rule-based rather than predictive; while effective for current operational needs, it does not forecast future trends. Third, the sample size for the usability evaluation was moderate. These limitations open opportunities for future research, such as incorporating machine learning for predictive analytics and extending the evaluation to a broader range of distribution firms.

7. Conclusion

This study proposed and evaluated a multi-layer BI framework designed to support real-time KPI monitoring in a multi-divisional building materials distribution firm. Addressing the specific challenges of data fragmentation and manual reporting latency often faced by SMEs in developing economies, the framework formalizes a five-layer architecture that explicitly decouples KPI logic from visualization. By treating KPI logic as a dedicated, reusable architectural component, the system ensures consistent and transparent interpretation of performance across diverse organizational divisions.

The implementation results confirm that the proposed architecture significantly enhances operational efficiency. The integration of an automated ETL pipeline and a rule-based anomaly detection engine reduced reporting latency from a 24-hour manual cycle to near-real-time updates (15–30 minutes), enabling faster managerial intervention in cases of sales declines or stock discrepancies. Furthermore, the usability-driven evaluation—combining heuristic assessment and scenario-based testing—validated that the role-based dashboards effectively reduced cognitive load and improved task completion speed for directors, supervisors, and operational staff. These findings demonstrate that a well-structured, rule-based BI approach offers a scalable and cost-effective alternative to complex predictive analytics for organizations with limited IT infrastructure.

However, this study is not without limitations. The research was conducted within a single organizational context, which may constrain the generalizability of the findings to other industrial sectors. Additionally, the anomaly detection logic relies on deterministic rules rather than predictive machine-learning models, thereby limiting its ability to forecast future market shifts. Future research should extend this work by incorporating lightweight machine learning algorithms for predictive trend analysis, expanding the evaluation across multiple firms and industries, and developing automated recommendation engines to further enhance decision support capabilities. Overall, this study provides a replicable blueprint for implementing robust, audit-ready BI systems that bridge the gap between technical architecture and user-centered design.

Acknowledgment

The authors gratefully acknowledge the financial and institutional support provided by Telkom University through the Internal Research Grant scheme under Grant No. Tel455/ITTP/2/2023.
