New Enhancement Techniques for Optimizing Multimedia Visual Representations in Music Pedagogy

Mengmeng Chen | Chuixiang Xiong*

School of Music and Dance, QuJing Normal University, Qujing 655011, China

School of Continuing Education, Hankou University, Wuhan 430212, China

Corresponding Author Email: Xcx37175809@163.com

Page: 2131-2138 | DOI: https://doi.org/10.18280/ts.400530

Received: 20 May 2023 | Revised: 16 August 2023 | Accepted: 21 August 2023 | Available online: 30 October 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

With the continuous advancement of modern educational technologies, there has been a growing interest in multimedia visual expression methods within the domain of music pedagogy. These methods aim to provide students with an intuitive and vivid learning environment, facilitating a more profound understanding of music's structure and emotions. However, despite the availability of various image enhancement techniques, many tend to focus only on certain image attributes, often neglecting a comprehensive perspective on image aesthetics and authenticity. Addressing this issue, innovative image enhancement techniques are introduced in this study: an adaptive Gamma correction method for luminance adjustment, a saturation correction method based on luminance components, and a multimedia image enhancement method founded on an improved CLAHE algorithm. These methods not only significantly elevate the visual effects of multimedia in music teaching but also offer substantial technical support for the modernization of music education.

Keywords: 

music pedagogy, multimedia visualization, image enhancement, gamma correction, CLAHE algorithm

1. Introduction

In the prevailing information era, the rapid advancement of technology and digital methodologies has seen multimedia technologies increasingly permeating various educational domains [1-8]. Notably, within the sphere of music pedagogy, visual expressions afforded by multimedia have been identified as a novel and highly compelling pedagogical tool. Such tools have facilitated students' understanding of music's essence and structure through vivid and intuitive imagery and audio [9, 10]. Nevertheless, the challenge faced by educators and technologists alike lies in the amalgamation of cutting-edge techniques with traditional music pedagogy to craft more efficient and profound teaching methodologies [11-13].

The application of multimedia image enhancement in music teaching is not pursued for visual spectacle alone; rather, it aims at a more accurate conveyance of musical emotions, styles, and structures [14-19]. By optimizing image luminosity, colour, and contrast, students can be drawn more deeply into the world of music, thereby augmenting their perception and understanding [20, 21]. Consequently, research into these techniques not only holds profound significance in elevating the efficacy of music pedagogy but also actively propels innovation and development in music education.

Despite the array of image enhancement methods currently available, many tend to concentrate solely on specific image attributes, often overlooking the holistic perception and authenticity of images [22-24]. For instance, some conventional enhancement techniques might excessively emphasize image contrast, leading to potential loss of image details or colour distortion [25, 26]. Moreover, in the face of the complex and ever-changing multimedia scenarios in music pedagogy, traditional methods often fall short, failing to meet the escalating pedagogical demands [27, 28].

Addressing these issues, innovative image enhancement techniques are introduced in this study. Firstly, an exploration into the adaptive Gamma correction method for luminance adjustment and a saturation correction method grounded on luminance components has been undertaken, aiming to elevate image luminance balance and colour authenticity. Secondly, a deep dive into the multimedia image enhancement technique rooted in an improved CLAHE algorithm was conducted, aiming to accentuate image details and augment its contrast. The deployment of these techniques not only substantially enhances the visual effects of multimedia in music teaching but also offers a more authentic and vivid learning environment, further advancing the modernization process of music education.

2. Luminance Adjustment and Saturation Correction in Multimedia Imagery

Multimedia imagery has served as a bridge in music pedagogy, transforming abstract musical knowledge into intuitive visual information, aiding students in gaining a deeper, more comprehensive grasp of music. When elucidating music theory and notation, animations and images depicting various notes, key signatures, and rests – their shapes, durations, and positions within scores – can facilitate a more intuitive understanding of the fundamental elements and structure of music. For students learning musical instruments, demonstrations of correct playing techniques and skills, such as finger positioning and blowing techniques, through videos or animations can expedite their learning process. Through images and videos, musicians, instruments, and performance scenes from various historical periods can also be showcased, assisting students in better comprehending the historical progression and cultural backdrop of music.

In music teaching, multimedia visual representations often form the core component, enabling students to understand the essence and structure of music in a more vivid and direct manner. Hence, to ensure students attain the optimum learning experience, the optimization of these visual representations proves crucial. Multimedia content used in music teaching often originates from diverse sources, such as old video recordings, digitized vintage photographs, and hand-drawn images. The quality, luminance, and saturation of these materials might vary, resulting in an overall instructional content that appears less cohesive and professional. By optimizing luminance and saturation, images can be made to appear more vibrant and colourful, thereby capturing students' attention and enhancing their learning interest and enthusiasm. Figure 1 provides the technical flowchart for the techniques presented in this study.

Figure 1. Flowchart of the proposed algorithm

2.1 Luminance adjustment

Figure 2. The adaptive Gamma function

Luminance adjustments in multimedia images are adaptively made in this study by considering the average luminosity of images. This ensures consistency in the light-dark relationship of images, making them appear more realistic and natural. Such adjustments are paramount for practical demonstrations in music pedagogy, such as instrument playing techniques or concert scenes. Different images might possess varying luminosity distributions and characteristics. Adapting luminance based on average luminosity means that luminance adjustments are tailored to the specific conditions of each image, rather than using fixed parameters or settings. Moreover, in certain multimedia content for music teaching, such as old videos or blurred images, the luminance might be uneven or excessively dim. Through adaptive luminance adjustments, the visibility of these images can be significantly enhanced, facilitating easier recognition and comprehension of the content by students. Figure 2 illustrates the adaptive Gamma function.

Initially, the image to be processed is loaded. If the image is in colour, it is converted to grayscale, since luminance is typically associated with the grayscale values of an image. The pixel values across the entire image are then summed, and dividing this sum by the number of pixels yields the average luminance of the image. Depending on the specific application requirements or user preferences, a target luminance is determined; here the target is a normalized mean luminance of about 1/2, and every pixel is corrected with the same adaptive Gamma exponent so that the entire image is adjusted in a consistent proportion. Denoting the normalized average luminance of the original multimedia image by S, the Gamma correction should therefore satisfy:

$S^{\varepsilon}=1 / 2$              (1)

Next, logarithmic calculations are carried out to get the adaptive ε value:

$\varepsilon=\log _S 1 / 2$              (2)

$\varepsilon=\frac{\log _{10} 1 / 2}{\log _{10} S}$              (3)

$\varepsilon=\frac{-3}{10 \log _{10} S}$              (4)

where the approximation $\log _{10} 1/2 \approx -0.3$ has been used.

If the original image is in colour, the adjusted grayscale image can be reverted to the colour space. Finally, the adjusted image is saved or output.
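To make this procedure concrete, a minimal Python sketch of the adaptive Gamma correction of Eqs. (1)-(4) is given below. It assumes an 8-bit image loaded with OpenCV; the function name and the choice of adjusting the HSV value channel for colour images are illustrative assumptions rather than prescriptions from this study.

```python
import cv2
import numpy as np

def adaptive_gamma_correction(image: np.ndarray) -> np.ndarray:
    """Adjust luminance so the corrected mean is about 1/2 (Eqs. 1-4); a sketch."""
    is_colour = image.ndim == 3
    if is_colour:
        # Work on the value (luminance) channel of HSV so hues are preserved.
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        channel = hsv[:, :, 2]
    else:
        channel = image

    # Normalise to [0, 1] and compute the average luminance S.
    norm = channel.astype(np.float64) / 255.0
    s = float(np.clip(norm.mean(), 1e-6, 1 - 1e-6))  # guard against log(0)/log(1)

    # Adaptive exponent: S**eps = 1/2  =>  eps = log(1/2) / log(S)  (Eqs. 2-3).
    eps = np.log10(0.5) / np.log10(s)

    corrected = (np.power(norm, eps) * 255.0).clip(0, 255).astype(np.uint8)

    if is_colour:
        hsv[:, :, 2] = corrected
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return corrected

# Example usage (file path is hypothetical):
# out = adaptive_gamma_correction(cv2.imread("score_demo.png"))
```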

2.2 Saturation correction

Within the context of music education, to accentuate certain pivotal content or pedagogical points, an emphasis on colour vibrancy is often requisite. For instance, while showcasing different parts of musical instruments or emphasizing the significance of certain musical notes, a pronounced colour vividness can aid students in rapidly pinpointing the crux of the matter. Simultaneously, due to myriad reasons, such as shooting conditions or equipment constraints, the original image's colours might appear rather subdued. Saturation correction can alleviate this issue, instilling a more dynamic feel into the imagery.

Traditional saturation correction is typically achieved through a linear transformation, as illustrated in the following equation. Assuming the corrected saturation is denoted by A′, the original saturation by A, and the mean grayscale value of the current image (i.e., the expectation of its grayscale probability distribution) by B, then:

$A^{\prime}=B+\frac{255-B}{255} \times A$              (5)
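For reference, this traditional correction is only a per-pixel affine mapping of the saturation channel; a short sketch, assuming 8-bit saturation values and an illustrative function name, is:

```python
import numpy as np

def linear_saturation_correction(a: np.ndarray, b: float) -> np.ndarray:
    """Traditional correction A' = B + (255 - B) / 255 * A, as in Eq. (5)."""
    corrected = b + (255.0 - b) / 255.0 * a.astype(np.float64)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```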

In the method proposed in this study, the luminance component is initially extracted from the image. The saturation adjustment coefficient is then computed, often as a function of the luminance component. For example, when luminance is high, the saturation adjustment coefficient might be increased, imbuing the colours with greater vibrancy. The calculated saturation adjustment coefficient is applied to each pixel of the image, modifying its saturation accordingly.

Assuming the original saturation is denoted by A, the corrected channel saturation by AU, the channel luminance and the overall luminance by VU and V respectively, and the transformation coefficient by y, the proposed method sets the channel saturation as:

$A_U=A \times\left[1+\left(\frac{V_U-V}{V_U}\right) \times y\right]$              (6)

Assuming a pixel point is represented by (u, k), the average luminance and average saturation within a 3×3 neighbourhood around this pixel are represented by $\bar{C}_{\Xi}(u, k)$ and $\bar{A}_{\Xi}(u, k)$, respectively, the luminance variance and saturation variance at this pixel are denoted by $\Theta_C(u, k)$ and $\Theta_A(u, k)$, and the transformation coefficient is computed as:

$y(u, k)=\left\{\begin{array}{ll}\dfrac{\sum_{u, k \in \Xi}\left|C(u, k)-\bar{C}_{\Xi}(u, k)\right| \times\left|A(u, k)-\bar{A}_{\Xi}(u, k)\right|}{\sqrt{\Theta_A(u, k)}}, & C_U \neq C \\ \dfrac{\sum_{u, k \in \Xi}\left|A(u, k)-\bar{A}_{\Xi}(u, k)\right|}{\sqrt{\Theta_A(u, k)}}, & C_U=C\end{array}\right.$              (7)

Lastly, ensuring the adjusted saturation value remains within permissible bounds is vital to prevent colour distortion. The value of y alters with channel luminance variations, achieving a natural enhancement of image colours, making them appear more vibrant and lively, fitting the requirements of multimedia visual representations in music pedagogy.
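A hedged sketch of this luminance-guided saturation correction is shown below. Eqs. (6)-(7) leave some symbols open to interpretation, so the reading adopted here (VU as the per-pixel value channel, V as its global mean, and a locally normalised co-variation term for y) is an assumption; the OpenCV box filters used for the 3×3 statistics and the function name are likewise illustrative.

```python
import cv2
import numpy as np

def luminance_guided_saturation(image_bgr: np.ndarray) -> np.ndarray:
    """Sketch of the proposed saturation correction (one reading of Eqs. 6-7)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    a = hsv[:, :, 1]                       # saturation channel A
    c = hsv[:, :, 2]                       # luminance (value) channel C

    # 3x3 neighbourhood means and saturation variance (statistics of Eq. 7).
    c_bar = cv2.blur(c, (3, 3))
    a_bar = cv2.blur(a, (3, 3))
    theta_a = np.maximum(cv2.blur(a * a, (3, 3)) - a_bar ** 2, 0.0)

    # Transformation coefficient y(u, k): neighbourhood sum of the joint
    # luminance-saturation deviation, normalised by the saturation spread.
    eps = 1e-6
    dev = np.abs(c - c_bar) * np.abs(a - a_bar)
    y = cv2.boxFilter(dev, -1, (3, 3), normalize=False) / np.sqrt(theta_a + eps)
    y = y / (y.max() + eps)                # keep the coefficient in [0, 1]

    # Eq. (6): scale saturation by the relative channel luminance, weighted by y.
    v_u = c                                # per-pixel channel luminance (assumed)
    v = c.mean()                           # global mean luminance (assumed)
    a_new = a * (1.0 + (v_u - v) / (v_u + eps) * y)

    hsv[:, :, 1] = np.clip(a_new, 0, 255)  # keep saturation within valid bounds
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```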

3. Enhancement of Multimedia Images Based on the Improved CLAHE Algorithm

Multimedia visual representations in music pedagogy play a pivotal role in contemporary education. Through an array of forms such as images, videos, and animations, visualization technologies can present musical knowledge and skills in a more intuitive and vivid manner, significantly enhancing students' learning interest and efficacy. To ensure the optimal presentation of such multimedia content, image enhancement techniques have become paramount.

The quality and detail of images in multimedia visual representations for music education are held to particularly high standards. When it comes to intricate musical scores, detailed displays of instruments, or lively demonstrations of musical scenarios, contrast and detail reproduction of images are crucial. Against this backdrop, the introduction of the multimedia image enhancement method based on the improved CLAHE algorithm holds pronounced significance and advantages.

Figure 3. Bilinear interpolation

Figure 3 illustrates the principle of bilinear interpolation in the traditional CLAHE algorithm. The conventional CLAHE algorithm enhances contrast by dividing the image into multiple small rectangular areas and equalizing the histogram within each area. However, such a method can lead to excessive contrast in certain regions, resulting in an over-enhanced image. To address this issue, minimum and maximum thresholds are proposed for the grayscale level of each small rectangular area, defining a dynamic range for the grayscale levels. Pixels exceeding this range are then clipped and redistributed, ensuring that the enhancement appears more natural and uniform. Figure 4 presents the flowchart of the image enhancement algorithm proposed in this study.

Figure 4. Flowchart of the image enhancement algorithm

Initially, probability density functions of each small rectangular area of the image are calculated. Based on this probability density function, the mean and variance within the area are ascertained. These two statistical measures, reflecting the overall luminance and contrast of the image, provide crucial groundwork for subsequent processing. Assuming the expected value of the probability density function o(z) is represented by R(z) and defined as ω=R(z); and the variance of the probability density function o(z) is denoted by F(z) with the standard deviation represented by δ, the calculation formulas are:

$R(z)=\sum_{z=0}^{M-1} z o(z)$               (8)

$F(z)=R\left(z^2\right)-[R(z)]^2$               (9)

$\omega=R(z), \delta=\sqrt{F(z)}$               (10)

Utilizing the mean and variance obtained from the previous step, the minimum and maximum thresholds of each small rectangular area are computed. These thresholds help define a dynamic range for the grayscale level, ensuring that the enhancement is neither excessive nor insufficient. Supposing the minimum threshold is YMIN and the maximum threshold is YMAX, the formulas are:

$Y_{M I N}=\sum_{z=0}^{\omega+1.65 \delta} o(z)$               (11)

$Y_{M A X}=\sum_{z=0}^{\omega+1.96 \delta} o(z)$               (12)

Based on the data from the preceding two steps, the grayscale levels for each small rectangular area are determined. This grayscale level is defined within a dynamic range; pixels beyond this range will be clipped, ensuring a more uniform contrast enhancement of the image. Assuming the dynamic range of the grayscale level is denoted by f, with its minimum and maximum values represented by AMIN and AMAX respectively, the calculation formulas are:

$A_{MIN}=\left\{j \,\middle|\, \sum_{z=0}^{j} o(z)=Y_{MIN}\right\}$               (13)

$A_{MAX}=\left\{j \,\middle|\, \sum_{z=0}^{j} o(z)=Y_{MAX}\right\}$               (14)

$f=A_{MAX}-A_{MIN}$               (15)

Pixels exceeding the dynamic range defined in the previous step undergo a clipping process. These clipped pixels are redistributed within the dynamic range, allowing for a natural enhancement of contrast while retaining finer image details. Presuming the preset clipping threshold is CL, the histogram count at grayscale level z is G(z), the number of grayscale levels is M, and the total number of clipped pixels is TO, the calculation formulas are:

$T_O=\sum_{z=0}^{M-1} \max \left(G(z)-C_L,\ 0\right)$               (16)

$G_p(z)=\left\{\begin{array}{ll}G(z), & z>A_{MAX} \text{ or } z<A_{MIN} \\ G(z)+R_e, & A_{MIN} \leq z \leq A_{MAX}\end{array}\right.$               (17)

where $R_e$ denotes the portion of the clipped pixels redistributed to each grayscale level within the dynamic range.
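The per-tile computation of Eqs. (8)-(17) can be sketched as follows for a single rectangular area of an 8-bit grayscale image. The uniform redistribution rule Re = TO/f, the function name, and the final equalisation of the tile with the clipped histogram are assumptions made to complete the sketch; in the full algorithm the tiles are subsequently blended by bilinear interpolation as in Figure 3.

```python
import numpy as np

def enhance_tile(tile: np.ndarray, clip_limit: float, levels: int = 256) -> np.ndarray:
    """Threshold, clip and redistribute the histogram of one tile (a sketch)."""
    hist, _ = np.histogram(tile, bins=levels, range=(0, levels))
    o = hist.astype(np.float64) / tile.size          # probability density o(z)

    z = np.arange(levels)
    omega = np.sum(z * o)                            # mean, Eqs. (8) and (10)
    delta = np.sqrt(np.sum(z ** 2 * o) - omega ** 2) # std deviation, Eqs. (9)-(10)

    cdf = np.cumsum(o)
    y_min = cdf[min(int(omega + 1.65 * delta), levels - 1)]   # Eq. (11)
    y_max = cdf[min(int(omega + 1.96 * delta), levels - 1)]   # Eq. (12)

    # Grayscale dynamic range [A_MIN, A_MAX] and its width f, Eqs. (13)-(15).
    a_min = int(np.searchsorted(cdf, y_min))
    a_max = int(np.searchsorted(cdf, y_max))
    f = max(a_max - a_min, 1)

    # Clip the histogram at the threshold and redistribute the excess T_O
    # uniformly over the dynamic range (Eqs. 16-17, with R_e = T_O / f assumed).
    g = hist.astype(np.float64)
    t_o = np.maximum(g - clip_limit, 0.0).sum()
    g = np.minimum(g, clip_limit)
    g[a_min:a_max + 1] += t_o / f

    # Equalise the tile with the clipped, redistributed histogram.
    mapping = np.cumsum(g) / g.sum() * (levels - 1)
    return mapping[tile].astype(np.uint8)
```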

High clarity and intricate texture details in images are essential for multimedia visual representations in music education. This is particularly true since music teaching often involves elements like musical scores, instrument details, and demonstrations of playing techniques, where even minute details can profoundly influence students' learning experience and outcomes. For instance, a minute notation mark on a musical score might dictate the method of performance, and subtle structural changes in an instrument or its method of being played could influence the tonality. Hence, ensuring these details are vividly represented in multimedia materials is paramount. The focus of this study revolves around optimizing multimedia visual representations through image processing techniques. Given that traditional image enhancement methods might fail to retain or highlight some minute details fully, this work introduces a multi-scale detail enhancement algorithm to bolster the texture and detail information of the image.

Three distinct scales (small, medium, and large) are first chosen based on the specific requirements of music education. The original image U then undergoes filtering with Gaussian kernels H1, H2, and H3, producing three Gaussian-blurred images S1, S2, and S3 that represent smoothed versions of the image at the respective scales:

$S_1=H_1 * U, S_2=H_2 * U, S_3=H_3 * U$               (18)

Differential operations then subtract successive blurred versions from one another (and the finest blurred version from the original), highlighting the discrepancies between adjacent scales and thereby extracting the fine, intermediate, and coarse detail images F1, F2, and F3:

$F_1=U-S_1, F_2=S_1-S_2, F_3=S_2-S_3$               (19)

To obtain an image that retains macro-structural information while showcasing micro-texture and details, the detail images from the three scales need to be fused. Fusion can be achieved through a weighted average, where the weights can be adjusted based on specific needs in music education to emphasize details at a particular scale. Assuming the fusion weights for the different detail images are represented by μ1, μ2, and μ3, and SGN denotes the sign function, the formula is:

$F^*=\left(1-\mu_1 \times S G N\left(F_1\right)\right) \times F_1+\mu_2 \times F_2+\mu_3 \times F_3$               (20)

Lastly, the fused detail enhancement image is merged with the original image to produce the final multi-scale detail-enhanced image.
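A compact sketch of this multi-scale procedure for a grayscale image is given below; the Gaussian kernel sizes and fusion weights are chosen purely for illustration, as the study does not specify them.

```python
import cv2
import numpy as np

def multiscale_detail_enhance(u: np.ndarray,
                              mu1: float = 0.5,
                              mu2: float = 0.5,
                              mu3: float = 0.25) -> np.ndarray:
    """Multi-scale detail enhancement following Eqs. (18)-(20); a sketch."""
    u = u.astype(np.float64)

    # Three Gaussian-blurred versions at small, medium and large scales, Eq. (18).
    s1 = cv2.GaussianBlur(u, (3, 3), 1.0)
    s2 = cv2.GaussianBlur(u, (7, 7), 2.0)
    s3 = cv2.GaussianBlur(u, (15, 15), 4.0)

    # Detail layers at the three scales, Eq. (19).
    f1, f2, f3 = u - s1, s1 - s2, s2 - s3

    # Weighted fusion with the sign-modulated fine layer, Eq. (20).
    fused = (1.0 - mu1 * np.sign(f1)) * f1 + mu2 * f2 + mu3 * f3

    # Merge the fused detail image back into the original.
    return np.clip(u + fused, 0, 255).astype(np.uint8)
```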

4. Experimental Results and Analysis

Table 1 presents the results of various image enhancement methods under DV/BV evaluation. A notable improvement over the original image in the DV/BV values is observed when the HR method is employed. This implies that the HR method demonstrates certain effectiveness in image enhancement compared to the original image. Fluctuations in performance are noticed among the rows of data for the singular value equalization and CLAHE methods. Singular value equalization outperforms in some instances, while CLAHE excels in others. However, overall, both methods display better performance than both the original image and the HR method. The weighted HE method's effectiveness is found to be comparable to singular value equalization and CLAHE, albeit slightly lower in some cases. Regardless of comparison to the original image, HR, or other traditional image enhancement methods, the algorithm proposed in this study consistently exhibits the highest performance values on DV/BV evaluation. Across each row of data, the values for this study's algorithm significantly surpass those of other methods, clearly demonstrating its superior performance in image enhancement.

Table 2 displays the evaluation results of various image enhancement methods based on the energy gradient function. Improvements in the energy gradient for the HR method over the original image are not pronounced, though in certain data rows, the HR method slightly surpasses the original. Both singular value equalization and CLAHE exhibit notable enhancements over the original image and the HR method. In certain rows of data, singular value equalization performs marginally better than CLAHE. However, overall, significant advancements in energy gradient are achieved by both. The performance of the weighted HE method is closely aligned with that of singular value equalization and CLAHE, yet, in some scenarios, its energy gradient is observed to be slightly higher. Regardless of the method compared against, the algorithm introduced in this study consistently attains the highest values in the energy gradient function evaluation, distinctly outstripping all other methods. This re-emphasizes its superior capabilities in image enhancement.

Table 3 presents the evaluation results of various image enhancement methods based on information entropy. Information entropy serves as a metric to measure the information content of an image, with higher values indicating a richer amount of information, typically leading to improved visual effects. Compared to the original image, the CLAHE method achieved a slight increase in information entropy in most scenarios, suggesting its efficacy in enhancing the image's informational content. In contrast, the HR method's information entropy is observed to be slightly below the original in many cases. The singular value equalization method achieved relatively higher information entropy values across the majority of data rows, indicating its commendable performance in image enhancement. The performance of the weighted HE method aligns closely with that of singular value equalization, and even outperforms it in certain scenarios, showcasing its effectiveness in augmenting the image's informational content. The algorithm proposed in this study exhibits values for information entropy close to the original image, albeit slightly lower in some cases. This could be attributed to the fact that the method, while enhancing certain image attributes (such as contrast, luminance, or texture details), might compromise on some information content. Nevertheless, its performance in terms of information entropy is found to be superior when compared to the HR method.

Table 4 displays the evaluation results of various image enhancement methods based on local contrast. Local contrast serves as a measure to gauge the detail and texture in an image, with higher values signifying more pronounced details and textures, thus rendering them more distinguishable to observers. The HR method demonstrated medium levels of local contrast across most data rows. The singular value equalization method exhibited either low or medium performance in terms of local contrast across a majority of the data rows. In most scenarios, the CLAHE method showcased relatively high local contrast values, implying its effectiveness in accentuating the textures and details of images. The evaluation results of the weighted HE method for local contrast closely align with those of the CLAHE method, albeit slightly lower in certain data rows. The algorithm proposed in this study consistently exhibited the highest values of local contrast across almost all data rows, distinctly outperforming other methods.
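For reproducibility, hedged reference implementations of two of the evaluation measures discussed above, information entropy and the energy gradient function, are sketched below using their standard definitions; the exact formulations used in the experiments are not listed here, so these should be read as representative rather than definitive.

```python
import numpy as np

def information_entropy(gray: np.ndarray) -> float:
    """Shannon entropy of the grayscale histogram (standard definition)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def energy_gradient(gray: np.ndarray) -> float:
    """Mean squared first differences in both directions (standard definition)."""
    g = gray.astype(np.float64)
    dx = g[1:, :-1] - g[:-1, :-1]          # vertical differences
    dy = g[:-1, 1:] - g[:-1, :-1]          # horizontal differences
    return float(np.mean(dx ** 2 + dy ** 2))
```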

Table 1. DV/BV evaluation results

Sample No. | Original Image | HR | Singular Value Equalization | CLAHE | Weighted HE | The Proposed Algorithm
1 | 44.91 | 63.24 | 256.32 | 321.05 | 369.24 | 884.12
2 | 65.23 | 88.24 | 789.24 | 1789.23 | 663.54 | 2152.03
3 | 38.24 | 61.37 | 331.02 | 231.02 | 169.38 | 278.14
4 | 55.32 | 78.36 | 625.31 | 568.21 | 518.23 | 1698.23
5 | 22.89 | 31.42 | 84.62 | 121.45 | 221.03 | 187.23
6 | 24.35 | 26.39 | 92.31 | 171.36 | 201.58 | 1192.36
7 | 62.38 | 73.26 | 884.23 | 1989.32 | 628.03 | 3245.18
8 | 68.39 | 71.02 | 925.36 | 2056.32 | 678.45 | 3456.29

Table 2. Evaluation results of energy gradient function

Sample No. | Original Image | HR | Singular Value Equalization | CLAHE | Weighted HE | The Proposed Algorithm
1 | 11.20 | 11.23 | 16.32 | 17.89 | 21.03 | 54.78
2 | 25.14 | 23.48 | 33.58 | 36.25 | 28.63 | 72.13
3 | 6.89 | 8.23 | 15.37 | 12.45 | 11.45 | 45.36
4 | 21.42 | 21.42 | 34.69 | 32.15 | 31.59 | 83.12
5 | 9.58 | 11.35 | 21.45 | 17.85 | 18.96 | 72.14
6 | 12.14 | 11.63 | 22.39 | 21.03 | 22.36 | 66.33
7 | 14.23 | 15.39 | 27.56 | 27.45 | 45.86 | 88.23
8 | 21.58 | 21.85 | 32.69 | 32.56 | 33.26 | 78.96

Table 3. Evaluation results of information entropy

Sample No. | Original Image | HR | Singular Value Equalization | CLAHE | Weighted HE | The Proposed Algorithm
1 | 21.34 | 22.04 | 18.97 | 21.45 | 22.13 | 21.17
2 | 23.04 | 23.18 | 21.46 | 23.74 | 22.45 | 22.36
3 | 18.96 | 21.36 | 18.36 | 22.39 | 22.69 | 21.45
4 | 22.33 | 22.48 | 22.39 | 23.72 | 23.57 | 22.68
5 | 22.41 | 22.36 | 22.14 | 23.16 | 23.56 | 22.45
6 | 21.06 | 22.46 | 21.08 | 22.38 | 23.15 | 21.48
7 | 22.39 | 21.06 | 22.46 | 21.14 | 23.48 | 22.47
8 | 23.85 | 21.39 | 22.38 | 21.75 | 22.47 | 21.03

Table 4. Evaluation results of local contrast

Sample No. | HR | Singular Value Equalization | CLAHE | Weighted HE | The Proposed Algorithm
1 | 1.48 | 1.29 | 1.78 | 0.68 | 2.14
2 | 2.68 | 2.38 | 3.46 | 3.21 | 3.69
3 | 1.78 | 1.71 | 2.21 | 2.13 | 2.89
4 | 2.29 | 1.89 | 2.89 | 2.59 | 3.24
5 | 1.14 | 0.85 | 1.56 | 1.31 | 1.89
6 | 1.21 | 1.42 | 1.58 | 1.78 | 3.14
7 | 0.92 | 1.59 | 1.89 | 1.89 | 2.28
8 | 2.17 | 2.14 | 2.78 | 2.14 | 3.29

Figure 5. Histogram of equalization effect

In Figure 5, the first histogram (depicted in blue) represents the histogram of the original image; its distribution presents a clear central peak at approximately grey level 150, declining gradually on either side. Such a distribution indicates that the image contains the largest number of pixels at mid grey levels, with the number of pixels at lower and higher grey levels diminishing progressively. The second histogram (illustrated in red) displays the histogram after enhancement by the proposed algorithm and exhibits a comparatively uniform distribution without a distinct central peak, implying that the pixels are fairly evenly spread across all grey levels. It can therefore be concluded that the proposed algorithm achieves effective histogram equalization and successfully amplifies the image's contrast. These results further affirm the efficiency and efficacy of the algorithm proposed in this research within the domain of image enhancement.

In Figure 6, the image prior to processing (on the left) demonstrates a reduced local contrast. Particularly, the distinction between light and shadow in the figures on stage and the background appears faint. This results in an image that appears somewhat dim and lacks vibrancy. In contrast, the post-processed image (on the right) reveals a notable enhancement in local contrast. Notably, the figures on stage, including their clothing, facial features, and body contours, are more pronounced and brighter. The details in the background, especially the stage lighting and equipment, have become more distinct. Consequently, it is deduced that the algorithm introduced in this study exhibits commendable outcomes in real-world image processing, successfully amplifying the image's local contrast, rendering it brighter and clearer. A distinct enhancement in image detail is evident when comparing the images before and after processing, resulting in a fuller and more dynamic overall visual effect. This further affirms the efficacy and practicality of the algorithm proposed in this study.

Figure 6. Enhancement of local contrast by the proposed algorithm

5. Conclusion

A series of innovative image enhancement techniques have been introduced in this study, including the adaptive Gamma correction method for luminance adjustment and the saturation correction method based on luminance components, as well as multimedia image enhancement techniques rooted in an improved CLAHE algorithm. The primary focus of this research lies in elevating the local contrast of images. Enhancing contrast is a pivotal phase in image processing, capable of augmenting image details and improving visual effects. A novel algorithm is proposed in this work, which was compared against existing methods such as HR, singular value equalization, CLAHE, and weighted HE. Through tabulated data, it was observed that the algorithm introduced in this study outperforms other methods in terms of elevating local contrast. Histogram results depict commendable histogram equalization effects yielded by the proposed algorithm. By comparing images before and after processing, the merits of the introduced algorithm in enhancing local contrast, detailing, and overall visual effects were clearly discernible.

The research presented herein offers an effective method to the domain of image processing, particularly in the enhancement of local contrast. Relative to other extant methods, the algorithm introduced in this study demonstrated superior performance during experiments. Histogram and image comparisons further corroborated the efficacy of this algorithm. In summation, this work provides a practical and efficient tool for image processing, harboring extensive potential applications.

References

[1] Zheng, X. (2022). Research on the whole teaching of vocal music course in university music performance major based on multimedia technology. Scientific Programming, 2022: 7599969. https://doi.org/10.1155/2022/7599969

[2] Wang, D. (2022). Analysis of multimedia teaching path of popular music based on multiple intelligence teaching mode. Advances in Multimedia, 2022: 7166569. https://doi.org/10.1155/2022/7166569

[3] Feng, J., Xiao, Y. (2022). Application of multimedia computer-aided instruction in music teaching in universities. Computer-Aided Design and Applications, 19(S7): 1-11. https://doi.org/10.14733/cadaps.2022.S7.1-11

[4] Aman, A., Kasmi, M., Ratnawati, Iskandar, A., Zam, W., Mustika, N., Laswi, A.S., Hidayati, W., Akbar Pandaka, A.U. (2023). The virtual tour panorama as a guide and education media of the historic objects at Datu Luwu Palace. Ingénierie des Systèmes d’Information, 28(2): 425-432. https://doi.org/10.18280/isi.280218

[5] Huang, H., Hsin, C.T. (2023). Environmental literacy education and sustainable development in schools based on teaching effectiveness. International Journal of Sustainable Development and Planning, 18(5): 1639-1648. https://doi.org/10.18280/ijsdp.180535

[6] Budiarti, M., Ritonga, M., Rahmawati, Yasmadi, Julhadi, Zulmuqim. (2022). Padlet as a LMS platform in Arabic learning in higher education. Ingénierie des Systèmes d’Information, 27(4): 659-664. https://doi.org/10.18280/isi.270417

[7] Septinaningrum, Hakam, K.A., Setiawan, W., Agustin, M. (2022). Developing of augmented reality media containing Grebeg Pancasila for character learning in elementary school. Ingénierie des Systèmes d’Information, 27(2): 243-253. https://doi.org/10.18280/isi.270208

[8] Sukmawati, F., Santosa, E.B., Rejekiningsih, T. (2023). Design of virtual reality zoos through Internet of Things (IoT) for student learning about wild animals. Revue d'Intelligence Artificielle, 37(2): 483-492. https://doi.org/10.18280/ria.370225

[9] Sun, B. (2023). The design and development of network multimedia music teaching based on multiple linear regression algorithm. Applied Mathematics and Nonlinear Sciences. https://doi.org/10.2478/amns.2023.1.00148

[10] Ma, X. (2021). Analysis on the application of multimedia-assisted music teaching based on AI technology. Advances in Multimedia, 2021: 5728595. https://doi.org/10.1155/2021/5728595

[11] Ma, H. (2022). Research on multimedia music teaching based on artificial intelligence. Computational Intelligence and Neuroscience, 2022: 9730609. https://doi.org/10.1155/2022/9730609

[12] Zhu, X. (2021). Research on multimedia technology in music teaching. In 2021 2nd International Conference on Information Science and Education (ICISE-IE), Chongqing, China, pp. 1057-1060. https://doi.org/10.1109/ICISE-IE53922.2021.00240

[13] Chen, Y. (2021). The application of network multimedia technology in vocal music teaching. In 2021 4th International Conference on Information Systems and Computer Aided Education, New York, NY, United States, pp. 1560-1564. https://doi.org/10.1145/3482632.3483198

[14] Wu, Y., Qi, J. (2022). Application of image processing variation model based on network control robot image transmission and processing system in multimedia enhancement technology. Journal of Robotics, 2022: 6991983. https://doi.org/10.1155/2022/6991983

[15] Gao, Y., Chen, M., Du, S., Feng, G. (2022). Application of multimedia semantic extraction method in fast image enhancement control. Journal of Control Science and Engineering, 2022: 2282217. https://doi.org/10.1155/2022/2282217

[16] Deva Shahila, D.F., Krishnaveni, S.H., Stephen, V. (2021). Soft computing-based non-linear discriminate classifier for multimedia image quality enhancement. International Journal of Computers and Applications, 43(7): 674-683. https://doi.org/10.1080/1206212X.2019.1625152

[17] Fang, Y., Li, H., Li, X. (2013). Lifetime enhancement techniques for PCM-based image buffer in multimedia applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 22(6): 1450-1455. https://doi.org/10.1109/TVLSI.2013.2266668

[18] Zhang, X., Ma, H., Zhao, H., Wang, R. (2022). Optical character recognition of electrical equipment nameplate with contrast enhancement. In 2022 Power System and Green Energy Conference (PSGEC), Shanghai, China, pp. 1062-1066. https://doi.org/10.1109/PSGEC54663.2022.9880966

[19] Zhang, L., Lu, Y., Li, J., Chen, F., Lu, G., Zhang, D. (2023). Deep adaptive hiding network for image hiding using attentive frequency extraction and gradual depth extraction. Neural Computing and Applications, 35: 10909-10927. https://doi.org/10.1007/s00521-023-08274-w

[20] Klíma, M., Pazderák, J., Fliegel, K. (2007). Examples of subjective image quality enhancement in multimedia. Applications of Digital Image Processing XXX, 6696: 601-609. https://doi.org/10.1117/12.768864

[21] Wang, J., Huang, L., Zhang, Y., Ni, J., Lin, L. (2021). Algorithm for the detection of a low complexity contrast enhanced image source. Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 48(1): 96-106.

[22] Zou, H., Yang, P., Ni, R., Zhao, Y. (2021). Anti-forensics of image contrast enhancement based on generative adversarial network. Security and Communication Networks, 2021: 6663486. https://doi.org/10.1155/2021/6663486

[23] Li, X., Zhang, H., Pan, J., Li, Q., Zou, G., Gao, M. (2022). Low-light image enhancement based on variational retinex model. In Thirteenth International Conference on Signal Processing Systems (ICSPS 2021), 12171: 49-55. https://doi.org/10.1117/12.2631428

[24] Khan, R., Mehmood, A., Akbar, S., Zheng, Z. (2023). Underwater image enhancement with an adaptive self supervised network. In 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, pp. 1355-1360. https://doi.org/10.1109/ICME55011.2023.00235

[25] Pan, L. (2022). Sports dance intelligent training correction system based on multimedia image action real-time acquisition algorithm. In 2022 3rd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, pp. 1528-1531. https://doi.org/10.1109/ICOSEC54921.2022.9952152

[26] Zhu, Z., Xu, C. (2017). Organizing photographs with geospatial and image semantics. Multimedia Systems, 23: 53-61. https://doi.org/10.1007/s00530-014-0426-5

[27] Zheng, R., An, S. (2023). Digital art design and media practice integrating CAD and virtual reality technology. Computer-Aided Design and Applications, 20(S13): 86-97. https://doi.org/10.14733/cadaps.2023.S13.86-97

[28] Ding, L., Wang, P., Huang, H. (2021). Unified quality assessment of natural and screen content images via adaptive weighting on double scales. Signal Processing: Image Communication, 99: 116446. https://doi.org/10.1016/j.image.2021.116446