Accepted Papers


Vegetation Typification Integrated With Time Series Using Google Earth Engine

K. Rohith, T. Pranoom, V. Hari Vamsi and G. Jaya Lakshmi, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India.

ABSTRACT

Using the Google Earth Engine platform, we survey vegetation types from remotely sensed data, specifically Landsat imagery and Sentinel-2A data. Our aim is to improve vegetation typification by exploiting the multispectral capabilities of Landsat imagery. We use state-of-the-art image processing and machine learning algorithms to accurately classify different plant species in selected study areas. We also track temporal changes in vegetation using Sentinel-2A imagery, making it possible to analyse land cover changes over time. Because it combines the unique and rich spectral and temporal character of Sentinel-2A data, our approach supports broad vegetation monitoring and change detection. For the plant-based classification, vegetation is assigned to classes according to Normalized Difference Vegetation Index (NDVI) ranges.
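
As an illustration of the NDVI-range classification described above (a minimal sketch, not the authors' code), the following computes NDVI from a Sentinel-2 scene in the Google Earth Engine Python API and bins it into coarse vegetation classes; the area of interest, date range, and thresholds are placeholder assumptions.

    # Minimal sketch: NDVI-based vegetation typification in Google Earth Engine.
    # The AOI, dates, and class thresholds are illustrative assumptions.
    import ee

    ee.Initialize()

    aoi = ee.Geometry.Point([80.62, 16.51]).buffer(5000)  # hypothetical study area
    scene = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterBounds(aoi)
             .filterDate('2023-01-01', '2023-12-31')
             .sort('CLOUDY_PIXEL_PERCENTAGE')
             .first())

    # NDVI = (NIR - Red) / (NIR + Red); for Sentinel-2, NIR = B8 and Red = B4.
    ndvi = scene.normalizedDifference(['B8', 'B4']).rename('NDVI')

    # Bin NDVI into coarse vegetation classes (thresholds are placeholders):
    # 0 = water/bare, 1 = sparse, 2 = moderate, 3 = dense vegetation.
    classes = (ee.Image(0)
               .where(ndvi.gte(0.2), 1)
               .where(ndvi.gte(0.4), 2)
               .where(ndvi.gte(0.6), 3)
               .clip(aoi))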

KEYWORDS

Vegetation types, Remote sensing techniques, Multispectral capabilities, Categorization, Land management decisions, Robust methodology.


Heart Disease Prediction Using Data Mining Classification Algorithms

Deepanshu Sharma and Dr. Siddhartha Chauhan, Department of Computer Science and Engineering, NIT Hamirpur, H.P., India.

ABSTRACT

Heart diseases, also referred to as "cardiovascular diseases," are a group of disorders that affect the heart. These illnesses can cause heart attack, stroke, and other symptoms. After examining several research papers on the subject, it became clear that the majority of them used a single machine learning algorithm to predict heart disease. A few of them state that they were unable to enhance their models' performance through optimization techniques. As a result, they encountered difficulties in effectively predicting heart disease using their suggested methods. In an earlier study, PCA was also used, but it failed to provide considerable accuracy for such a sensitive research area, i.e., medical diagnosis. Data for this method was gathered from the "Heart Disease UCI" dataset in the UCI repository, which is accessible on Kaggle. Working on this dataset, we applied various dimensionality reduction techniques with various classifiers and evaluated their effectiveness. We thereby achieved considerably higher accuracy (98%) by de-noising the data (checking correlations, detecting and removing outliers, etc.) and using the MLP classifier.
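
For readers unfamiliar with this kind of pipeline, a minimal scikit-learn rendering of the de-noise, reduce, and classify steps might look as follows; the file name, target column, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of a de-noise -> PCA -> MLP pipeline (assumed details).
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv('heart.csv')   # Heart Disease UCI dataset (Kaggle); assumed layout
    # Simple de-noising: drop rows with gross outliers (beyond 3 standard deviations).
    df = df[(df - df.mean()).abs().le(3 * df.std()).all(axis=1)]

    X, y = df.drop(columns='target'), df['target']
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    model = make_pipeline(StandardScaler(),
                          PCA(n_components=0.95),   # keep 95% of the variance
                          MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000))
    model.fit(X_tr, y_tr)
    print('Test accuracy:', model.score(X_te, y_te))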

KEYWORDS

PCA, MLP, cardiovascular diseases, ML, Data mining.


Multi-faceted Question Complexity Estimation Targeting Topic Domain-specificity

Sujay R1, Suki Perumal1, Yash Nagraj1, Anushka Ghei2 and Srinivas K S1, 1Department of CSE(AI & ML), PES University, Karnataka, India, 2Department of CSE, PES University, Karnataka, India

ABSTRACT

Question difficulty estimation remains a multifaceted challenge in educational and assessment settings. Traditional approaches often focus on surface-level linguistic features or learner comprehension levels, neglecting the intricate interplay of factors contributing to question complexity. This paper presents a novel framework for domain-specific question difficulty estimation, leveraging a suite of NLP techniques and knowledge graph analysis. We introduce four key parameters: Topic Retrieval Cost, Topic Salience, Topic Coherence, and Topic Superficiality, each capturing a distinct facet of question complexity within a given subject domain. These parameters are operationalized through topic modelling, knowledge graph analysis, and information retrieval techniques. A model trained on these features demonstrates the efficacy of our approach in predicting question difficulty. By operationalizing these parameters, our framework offers a novel approach to question complexity estimation, paving the way for more effective question generation, assessment design, and adaptive learning systems across diverse academic disciplines.
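
The four parameters are the authors' constructs and their exact formulation is not reproduced here; purely as an illustration of the topic-modelling substrate they build on, a topic coherence score for an LDA model can be computed with gensim as follows (the corpus is a toy placeholder).

    # Illustrative only: LDA topic modelling with a coherence score (gensim).
    from gensim.corpora import Dictionary
    from gensim.models import CoherenceModel, LdaModel

    texts = [['gravity', 'force', 'mass'],
             ['cell', 'membrane', 'protein'],
             ['force', 'acceleration', 'mass']]

    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]

    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)
    coherence = CoherenceModel(model=lda, texts=texts,
                               dictionary=dictionary, coherence='c_v').get_coherence()
    print('Topic coherence (c_v):', coherence)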

KEYWORDS

Question difficulty estimation, knowledge graph analysis, BERT, Domain specific metrics, Topic modelling, Natural Language Processing, Question Answering, Cognitive Load, Learning Analytics.


Resume Analyzer

Vinayak Subray Hegde and Premalatha H M, Department of Computer Applications, PES University, Bengaluru.

ABSTRACT

The “Resume Analyzer” is an advanced web application that provides solutions for both recruiters and applicants using Natural Language Processing (NLP) technology. Its main goal is to analyse an uploaded resume and provide predictions, suggestions, and advice to both job seekers and recruiters. Candidates upload their resume in PDF format, and the web application returns basic information, experience level, a predicted job role, existing and recommended skills, course recommendations matching the predicted job role, and YouTube links for interview and resume tips and ideas. For recruiters, it analyses the resume and provides basic information, existing and recommended skills, and parsed information on whoever uses the tool (for a better recruiting process), available as a download.
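
As a hedged sketch of the kind of NLP pipeline such a tool might use (the application's actual implementation is not described here), resume text can be extracted from a PDF and matched against skill lists as follows; the skill sets and role mapping are invented for illustration.

    # Illustrative resume-parsing sketch (not the application's actual code).
    from pdfminer.high_level import extract_text

    ROLE_SKILLS = {  # hypothetical mapping of roles to indicative skills
        'Data Scientist': {'python', 'pandas', 'machine learning'},
        'Web Developer': {'javascript', 'react', 'css'},
    }

    def analyse_resume(pdf_path):
        text = extract_text(pdf_path).lower()
        found = {s for skills in ROLE_SKILLS.values() for s in skills if s in text}
        # Predict the role whose skill set overlaps most with the resume.
        role = max(ROLE_SKILLS, key=lambda r: len(ROLE_SKILLS[r] & found))
        missing = ROLE_SKILLS[role] - found
        return {'predicted_role': role, 'skills': sorted(found),
                'recommended_skills': sorted(missing)}

    print(analyse_resume('resume.pdf'))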

KEYWORDS

Recruitment, Resume Analysis, Natural Language Processing (NLP), Candidate Selection, Career Development, User-Friendly Experience.


Harnessing Machine Learning and Blockchain for Supply Chain Innovation: A Comprehensive Exploration of Individual Applications

Dr. P. Venkateswara Rao, A. Bhuvana, M. Varshitha and T. Bhavishya, Department of Computer Science and Engineering, KL University, Vaddeswaram, Guntur, India

ABSTRACT

Blockchain, hailed as a groundbreaking technology, provides secure solutions in various fields. Its distributed database foundation eliminates single points of failure, resulting in time and cost savings. The evolving field of Machine Learning shows promising research prospects through its synergy with Blockchain. The decentralized nature of Blockchain, combined with the vast data in Machine Learning, opens up extensive exploration possibilities [1]. Positioned as a leader in shaping the internet, Blockchain offers a decentralized, secure, and auditable framework for data alteration and authentication, removing the need for intermediaries. The stability and privacy of Blockchain's decentralized database are maintained through a consensus process that ensures data protection and validity [2]. Through this exploration, we aim to highlight the various applications of these technologies and their impacts on optimizing supply chain processes.

KEYWORDS

decentralized architecture, predictive analytics, data augmentation, inventory management, shared ledger, data integrity.


A Deep Learning-based Phishing Detection System Using CNN, RNN, and Bi-LSTM

Lakksh Tyagi, Dept. of Computer Science, Vellore Institute of Technology, Vellore, India

ABSTRACT

URL phishing, a type of cyberattack using deceptive URLs and emails, aims to exploit user trust in electronic communication. This research explores the integration of Machine Learning (ML) and Deep Learning (DL) techniques within Artificial Intelligence (AI) to enhance cybersecurity. Unlike traditional ML methods that rely on human expertise for feature extraction, DL models streamline detection and classification in a unified phase, minimizing the need for manual engineering. The study focuses on three cutting-edge deep learning techniques—Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Bidirectional Long Short-Term Memory (Bi-LSTM)—for predicting phishing URLs. A novel approach leveraging these models is proposed, emphasizing the Bi-LSTM's superior performance in capturing both past and future contextual information. The comprehensive evaluation of these models highlights the Bi-LSTM as the preeminent performer in phishing detection, offering significant real-time applications in cybersecurity.
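
A minimal character-level Bi-LSTM URL classifier in Keras, sketched under assumed hyperparameters (the paper's exact architecture is not reproduced here), could look like this:

    # Minimal sketch: character-level Bi-LSTM phishing-URL classifier (assumed sizes).
    import tensorflow as tf

    MAX_LEN, VOCAB = 200, 128   # URLs padded to 200 chars; ASCII vocabulary

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(MAX_LEN,)),
        tf.keras.layers.Embedding(VOCAB, 32),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(1, activation='sigmoid'),   # phishing vs benign
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    def encode(url):
        """Map a URL to a fixed-length sequence of character codes."""
        ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
        return ids + [0] * (MAX_LEN - len(ids))

    # model.fit(X_train, y_train, ...) after encoding a labelled URL dataset.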

KEYWORDS

URL phishing, Deep Learning, Convolutional Neural Networks, Phishing detection, Neural networks.


Assessing Human Impact on Air Quality with Bayesian Networks and IDW Interpolation

Dr Hema Durairaj1 and L Priya Dharshini2, 1Senior Data Scientist, Publicis Sapient Pvt. Ltd., Bengaluru, KA, India, 2Postgraduate Student, Lady Doak College, Madurai, TN, India

ABSTRACT

As the human population explodes globally, meeting the demands for livelihood should also involve considerations of sustainability. Though there are several causes of global warming, air pollution makes a tremendous contribution to it. The Air Quality Index (AQI) measures how clean or polluted the air is in specific areas based on six major pollutants: sulfur dioxide (SO2), nitrogen dioxide (NO2), ground-level ozone (O3), carbon monoxide (CO), and particulate matter (PM2.5 and PM10). The AQI has six levels, namely "good," "satisfactory," "moderate," "poor," "very poor," and "severe," covering scores from 0 to 500. An implicit factor that affects the AQI is human movement within the environment. This research work involves real-time datasets collected from the TNPCB (Tamil Nadu Pollution Control Board) regarding Madurai's AQI at three stations for the year 2021 (during the COVID-19 period). A Bayesian network exhibits the causal relationship between human movement and the Air Quality Index through probabilistic modelling. An IDW interpolation chart is also visualized to conceptualize human intervention (nil, partial, and complete) against the AQI values obtained at the three stations.
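
For reference, inverse distance weighting estimates a value at an unsampled point as a distance-weighted average of station values, with weights 1/d^p; a minimal NumPy version (with made-up station coordinates and AQI values, not the TNPCB data) is shown below.

    # Minimal IDW sketch; the three station locations and AQI values are invented.
    import numpy as np

    stations = np.array([[9.93, 78.12], [9.90, 78.10], [9.95, 78.15]])  # lat, lon
    aqi = np.array([112.0, 95.0, 130.0])

    def idw(point, power=2.0):
        """Estimate AQI at `point` as a distance-weighted mean of station AQIs."""
        d = np.linalg.norm(stations - np.asarray(point), axis=1)
        if np.any(d < 1e-12):             # query point coincides with a station
            return float(aqi[np.argmin(d)])
        w = d ** -power                   # weight = 1 / distance^power
        return float(np.sum(w * aqi) / np.sum(w))

    print(idw([9.92, 78.11]))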

KEYWORDS

Air Quality Index, Bayes Theorem, Bayesian Network, IDW Interpolation.


AWS Pricing Structure From the Consumer's Perspective

Vishal Karpe, Sahil Prajapati and Dr. Meetu Kandpal, Lok Jagruti University, Ahmedabad, India

ABSTRACT

This study explores the nuances of pricing structures for Amazon Web Services (AWS), with a particular emphasis on comprehending the different models and cost control mechanisms that AWS offers. AWS has a variety of price plans designed to accommodate a wide range of user requirements, including the AWS Free Tier trial period. Important pricing methods, such as pay-as-you-go, save-when-you-commit, and pay-less-by-using-more, are examined in this research, along with their consequences for users' AWS investment strategies. The article offers insights into the relative advantages of AWS pricing structures through illustrations and a comparative analysis with Microsoft Azure. In addition, it looks at key cost control tools that enable users to track and optimize their AWS spending, such as the Amazon Price Analyzer, AWS Budgets, and Amazon Trusted Advisor. By offering a thorough overview of AWS pricing and cost management tactics, this document aims to help customers maximize cost efficiency and resource usage in their cloud computing undertakings.

KEYWORDS

Amazon Web Services (AWS), Pricing structures, Cost management tools, Comparative analysis, Cloud computing.


Proposing an Adaptive System for Enhancing the Efficiency of Low-power and Lossy IoT Networks

Anil Behal1, Lovejit Singh1, Manjeet Singh2, 1Department of CSE, Chandigarh University, 2Department of CSE, Sharda University

ABSTRACT

In the world of IoT, low-power and lossy networks (LLNs) are very lossy in nature and run on batteries, so their nodes have very limited power. To increase the efficiency of RPL, we propose a new objective function that works better than existing objective functions such as OF0 and MRHOF in terms of improved convergence time and decreased latency.


Artificial Intelligence

Disha Sharma, BCA, School of Engineering and Technology, Sushant University, Gurgaon, Haryana

ABSTRACT

Artificial Intelligence (AI) has evolved from a conceptual notion to a transformative force reshaping various aspects of human life. This paper delves into the multifaceted nature of AI, exploring its types, core technologies, and diverse applications across sectors such as healthcare, finance, transportation, and education. We discuss the benefits AI brings in enhancing efficiency, decision-making, and innovation, while also addressing the ethical concerns, job displacement, bias, and security issues it raises. Furthermore, the paper examines current research trends, regulatory landscapes, and the future trajectory of AI. Through this comprehensive analysis, we aim to provide a nuanced understanding of AI's impact on society and its potential to shape the future.

KEYWORDS

Artificial Intelligence (AI), Machine Learning, Deep Learning, Natural Language Processing (NLP), Robotics, Computer Vision, AI Applications, Ethical Concerns, AI Regulation, Future of AI.


Status and Challenges of Agricultural Entrepreneurship: Relevance in the COVID-19 Pandemic

K. Sujatha1, NPG. Bhavani2, R.S. Ponmagal3, 1Professor, EEE Department, Dr. MGR Educational and Research Institute, Chennai, Tamil Nadu, India., 2Associate Professor, ECE Department, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India., 3Associate Professor, CSE Department, SRM Institute of Science and Technology, Kattankulathur, Kanchipuram, Tamil Nadu, India.

ABSTRACT

The global COVID-19 pandemic severely impacted the agricultural economies of all countries, including India. The paper covers the current status of agribusiness in India and how it emerged as a major indicator of growth and development during the pandemic. In India, MANAGE has trained 72,806 agri-graduates in agricultural entrepreneurship, and as of December 2020, 30,583 (42%) of these were active agricultural ventures. The majority of farmers lost their markets as a result of the lockdown, while travel limitations and the lack of training and consulting services resulted in crop loss, high produce prices, and labour scarcity. AgriBazaar, Harvesting Farmer Network, Agricx Lab, CropIn, Bigbasket, Agrostar, Sickle Innovations, Agrirain, Farmguide, and PayAgri are a few agri-startups that arose in India during the pandemic to address the issues that farmers faced. Reorienting current agriculture towards agribusiness within the existing opportunities has the potential to significantly alter the lives of farmers and their stakeholders, as agricultural entrepreneurship has demonstrated the path to farmer growth and sustainability even during the pandemic.

KEYWORDS

COVID-19, Agripreneurship, Agri-startups, E-extension, E-Commerce, Entrepreneurship.


Analysis About the Implementation of Efficient Authentication Methodology in Sanctioning of Loan and Tender Using Blockchain Technology

Saranyah V, N Suganthi, Department of Computer Science and Engineering, Kumaraguru College of Technology, Coimbatore, India

ABSTRACT

The conventional method of tender sanctioning involves a process whereby tenders are awarded to companies that offer the most efficient and economical solutions according to the requirements of the government or the providing organization. However, the selection criteria for these companies are often shrouded in secrecy, lacking transparency in the allocation of tenders to specific bidding parties. To address this issue, we propose a blockchain-based authentication scheme incorporating different consensus mechanisms at various stages of the tender sanctioning process. This approach ensures a secure and transparent method of tender sanctioning. Similarly, we have implemented our concept in another aspect of bank operations, specifically in the sanctioning of loans to eligible customers applying for bank loans. By utilizing our proposed method, we establish a transparent process for loan sanctioning within the bank. Furthermore, we conduct a comparative analysis to evaluate the effectiveness of our proposed method across two distinct applications. This analysis involves comparing the efficacy of various consensus mechanisms, such as Proof of Work and Proof of Authority, in both scenarios.
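
To make the consensus comparison concrete for readers: the core of Proof of Work is a nonce search over a hash puzzle, whereas Proof of Authority replaces that search with signatures from authorized validators. The toy sketch below (an illustration, not the authors' implementation) shows the PoW side of that trade-off.

    # Toy Proof-of-Work sketch: find a nonce whose hash has a required prefix.
    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> int:
        """Return a nonce such that sha256(data + nonce) starts with `difficulty` zeros."""
        target = '0' * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f'{block_data}{nonce}'.encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    tender_record = 'tender#42:awarded-to:bidder-A'   # illustrative payload
    print('nonce:', mine(tender_record))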

KEYWORDS

Consortium blockchain, Consensus, Proof of Work (PoW), Proof of Authority (PoA), confidentiality, banking, tendering.


Speech Recognition for Small-scale Datasets Using Deep Learning Models – A Comparative Study

Meena Ugale, Sumit Desai, Harshita Gupta, Saquib Khan, Aditya Pole, Department of Information Technology, Xavier Institute of Engineering, Mumbai, India

ABSTRACT

Speech recognition is a crucial technology that has the potential to revolutionize the way we interact with machines. However, developing speech recognition models often requires large amounts of data, which can be time-consuming and expensive to collect. On the other hand, small-scale datasets are more accessible and can be used to train models that are just as effective as those trained on larger datasets. This research applied various deep learning models, including CNN_RNN, RESNETMOBILE_VGG, and several hybrid combinations of RESNET50, to small-scale datasets, and proposes two efficient models, CNN_RNN and RESNETMOBILE_VGG. In total, seven deep learning models for speech recognition were compared on small datasets, with the CNN_RNN and RESNETMOBILE_VGG models achieving the best performance. These models effectively captured features and showed good results on all evaluation metrics. All models performed well on small datasets, demonstrating the feasibility of lip reading with limited data. Our findings reveal that the proposed CNN_RNN and RESNETMOBILE_VGG models outperform others such as CNN and VGG: they proved adept at learning from limited data, capturing the crucial features and sequences needed for accurate lip-movement interpretation. Overall, this work analyses the performance of various deep learning models on small-scale, custom lip-reading datasets and proposes the CNN_RNN and RESNETMOBILE_VGG models for efficient and accurate lip reading.
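
As a rough sketch of what a CNN_RNN hybrid for frame-sequence (lip-reading) data looks like (the paper's exact layer configuration is not given here), a Keras model might be:

    # Sketch of a CNN_RNN hybrid for word classification from lip-frame sequences.
    # Frame size, sequence length, and class count are assumed placeholders.
    import tensorflow as tf
    from tensorflow.keras import layers

    FRAMES, H, W, CLASSES = 20, 64, 64, 10

    model = tf.keras.Sequential([
        layers.Input(shape=(FRAMES, H, W, 1)),
        # CNN applied per frame to extract spatial (lip-shape) features.
        layers.TimeDistributed(layers.Conv2D(32, 3, activation='relu')),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),
        # RNN models the temporal sequence of lip movements.
        layers.LSTM(128),
        layers.Dense(CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])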

KEYWORDS

Deep Learning, Small Scale Dataset, Visual Speech Recognition, CNN_RNN, RESNETMOBILE_VGG.


Factors Impacting Response Latency in Remote Request Service Systems

Vakhid Zakirov and Eldor Abdullaev, Department of Radio electronic devices and systems, Tashkent State Transport University, Tashkent, Uzbekistan

ABSTRACT

The extensive usage of remote service systems is responsible for the rapid growth in the number of users. As a result, the time required to serve user requests increases for a variety of reasons. This necessitates an examination of the impact of these factors, the development of techniques to eliminate them, and the enhancement of methods for lowering service time. This paper presents studies on the factors that influence service time. In doing so, we investigated the impact of server CPU and RAM performance, network speed, queue length, software code structure, client-server communication distance, and web server capabilities on service time.

KEYWORDS

service time, CPU, RAM, HTTP, network capacity, queue, geographic location, code optimization, web server.


Unseen Power of Information Assurance Over Information Security

Guy Mouanda, Computer Science and Informatics, Oakland University, Michigan, USA

ABSTRACT

Information systems and data are necessary resources for many companies and individuals, but they likewise encounter numerous risks and dangers that can threaten their protection and value. Information security and information assurance are two connected disciplines for protecting the confidentiality, integrity, and availability of information systems and data. Information security relates to the processes and methods that block unlawful access, modification, or exposure of data, whereas information assurance covers the broader goal of making certain that data is reliable, consistent, and resilient. This paper reviews the primary models, rules, and challenges of information security and information assurance, examines some of the top methods, principles, and guidelines that can help achieve them, and then investigates the shift in prominence of information security from being obscured within the Information Technology field to a liability seen as central to resolving technology breaches around the world. The paper weighs the various controls and shows how information assurance can be used to spotlight security problems by focusing on human resource assets as well as technology. Finally, it demonstrates how information assurance must be considered above other technologies that claim to secure information.

KEYWORDS

Risk Assessment, Security Models, Authentication, Access Control, Network Security, Cryptography, Policies, Software Security, Information Assurance, Information Security.


An Accurate Data Analysis Across a Secure Cloud Pipeline by Comparing Advanced Encryption Standard and Fernet

Gowarappagari Ganesh, Dhanalakshmi R, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India, 602105.

ABSTRACT

The major aim of this research is to improve the accuracy of data analysis across a cloud pipeline using the Advanced Encryption Standard (AES) algorithm compared with the Fernet algorithm. In the comparative analysis of the AES and Fernet encryption methods, the data was divided into training and testing sets using various splits. The G-power statistical test was employed with specific parameters: a significance level (α) of 0.05, β of 0.2, and a desired power of 0.8. The dataset consisted of 1110 samples. Group 1 included 20 iterations of AES encryption, while Group 2 contained 20 iterations of Fernet encryption, for a total of 40 iterations. The results revealed that AES (96.73% accuracy) exhibited a statistically significant increase in accuracy compared to Fernet (91.59%), with a two-tailed significance level of 0.000, which is less than 0.05; the independent sample t-test therefore yielded a statistically significant result. These statistical values were computed using ClinCalc. This indicates that the AES algorithm analyses data across a cloud pipeline better than the Fernet algorithm and gives the higher accuracy rate.
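
For readers, the two schemes under comparison can be exercised with Python's cryptography package as below; this round-trip sketch illustrates the APIs only and does not reproduce the paper's accuracy experiment.

    # Round-trip sketch of the two compared schemes (cryptography package).
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    data = b'sensitive record moving through the cloud pipeline'

    # Fernet: AES-128-CBC plus HMAC under the hood, with a simple high-level API.
    f = Fernet(Fernet.generate_key())
    assert f.decrypt(f.encrypt(data)) == data

    # AES (here in GCM mode, one common authenticated configuration).
    key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    aes = AESGCM(key)
    assert aes.decrypt(nonce, aes.encrypt(nonce, data, None), None) == data
    print('Both schemes round-trip the sample data successfully.')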


Security Concerns in IoT Light Bulbs: Investigating Covert Channels

Janvi Panwar and Ravisha Rohilla, Indira Gandhi Delhi Technical University for Women, India

ABSTRACT

The proliferation of Internet of Things (IoT) devices has raised significant concerns regarding their security vulnerabilities. This paper explores the security risks associated with smart light systems, focusing on covert communication channels. Drawing upon previous research highlighting vulnerabilities in communication protocols and encryption flaws, the study investigates the potential for exploiting smart light systems for covert data transmission. Specifically, the paper replicates and analyzes an attack method introduced by Ronen and Shamir, which utilizes the Philips Hue White lighting system to create a covert channel through visible light communication (VLC). Experimental results demonstrate the feasibility of transmitting data covertly through subtle variations in brightness levels, leveraging the inherent functionality of smart light bulbs. Despite limitations imposed by device constraints and communication protocols, the study underscores the need for heightened awareness and security measures in IoT environments. Ultimately, the findings emphasize the importance of implementing robust security practices and exercising caution when deploying networked IoT devices in sensitive environments.
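
The modulation idea replicated here can be summarized in a few lines: data bits are mapped onto small brightness offsets that a light sensor can recover but a human observer does not notice. The sketch below is a generic illustration of that encoding, not the Philips Hue attack code.

    # Generic sketch of bit-to-brightness covert encoding (illustration only).
    BASE, DELTA = 200, 4   # nominal brightness and a barely perceptible offset

    def encode_bits(payload: bytes):
        """Yield a brightness level per bit: BASE+DELTA for 1, BASE-DELTA for 0."""
        for byte in payload:
            for i in range(8):
                bit = (byte >> (7 - i)) & 1
                yield BASE + DELTA if bit else BASE - DELTA

    def decode_levels(levels):
        """Recover bytes by thresholding measured brightness around BASE."""
        bits = [1 if lvl > BASE else 0 for lvl in levels]
        return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
                     for k in range(0, len(bits), 8))

    levels = list(encode_bits(b'hi'))
    assert decode_levels(levels) == b'hi'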

KEYWORDS

Internet of Things (IoT), Security vulnerabilities, Smart light systems, ZigBee Light Link (ZLL) protocol, Denial-of-Service (DoS) attacks, Firmware manipulation, Covert communication channels, Visible light communication (VLC), Data exfiltration, Air-gapped networks, Smart LED bulbs, PWM modulation, Orthogonal Frequency-Division Multiplexing (OFDM), Light sensor, Signal processing.


Exploring Use of ChatGPT for Integrated Design Environments: A Quality Model for ChatGPT

Ayse Kok Arslan, Oxford Alumni- Silicon Valley Chapter, CA, USA

ABSTRACT

Given the proliferation of ChatGPT into our everyday lives, the aim of this paper is to present a quality design framework for integrating the use of ChatGPT in development environments. The study starts with a discussion of integrated design environments (IDEs) and explores their recent use with generative AI models. It also highlights commonly used techniques in AIGC and addresses concerns surrounding trustworthiness and responsibility in the field. Finally, it explores open problems and future directions for the use of this quality framework, highlighting potential avenues for innovation and progress.

KEYWORDS

AI, ChatGPT, integrated design, generative AI, trustworthiness.


Deciphering Cardiac Destiny: Unveiling Future Risks through Cutting-Edge Machine Learning Approaches

Ms. G. Divya, M. Naga Sravan Kumar, T. Jaya Dharani, B. Pavan and K. Praveen, Lakireddy Balireddy College of Engineering, India

ABSTRACT

Cardiac arrest remains a leading cause of death worldwide, necessitating proactive measures for early detection and intervention. This project aims to develop and assess predictive models for the timely identification of cardiac arrest incidents, utilizing a comprehensive dataset of clinical parameters and patient histories. Employing machine learning (ML) algorithms like XGBoost, Gradient Boosting, and Naive Bayes, alongside a deep learning (DL) approach with Recurrent Neural Networks (RNNs), we aim to enhance early detection capabilities. Rigorous experimentation and validation revealed the superior performance of the RNN model, which effectively captures complex temporal dependencies within the data. Our findings highlight the efficacy of these models in accurately predicting cardiac arrest likelihood, emphasizing the potential for improved patient care through early risk stratification and personalized interventions. By leveraging advanced analytics, healthcare providers can proactively mitigate cardiac arrest risk, optimize resource allocation, and improve patient outcomes. This research highlights the transformative potential of machine learning and deep learning techniques in managing cardiovascular risk and advances the field of predictive healthcare analytics.

KEYWORDS

Cardiac arrest prediction, Machine learning, Predictive modeling, Patient care, Healthcare analytics.


Conceptual Framework for Cost-effective Automatic Number Plate Recognition System

Varad Pramod Nimbalkar, Vishu Kumar, Saad Sayyed, Siddharth Bharadwaj, Vedansh Gohil and Shamla Tushar Mantri

ABSTRACT

ANPR is essential for traffic control, law enforcement, and automated toll collection systems. Traditional ANPR solutions are expensive, rely heavily on hardware, and are not suitable for mass adoption. We present a low-cost, high-performance ANPR system built on a Raspberry Pi 4 with a camera module and Optical Character Recognition (OCR) capabilities. The system uses a Convolutional Neural Network (CNN) for OCR, which can identify license plates with high accuracy even under different lighting conditions and resolutions. The image pre-processing pipeline includes noise removal and gray scaling to make edge extraction simpler, which improves license plate visibility in crowded frames. These pre-processed images are then passed to a CNN trained for plate localization and extraction, making the system invariant to distortion and locale and ensuring better accuracy in number plate detection applications. This low-cost solution represents a viable alternative to traditional ANPR systems for larger-scale applications in traffic control, law enforcement, and automatic toll processing. In future iterations, we aim to increase performance and robustness by providing more data from which the system can learn. Moreover, the incorporation of advanced functionalities such as cloud-based analytics would strengthen system features that will aid smart city infrastructure integration. These developments aim to increase ANPR system throughput at a scale that will play a role in enabling safer and more efficient transportation systems.
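
A minimal version of the described pre-processing pipeline (gray scaling, noise removal, edge extraction) in OpenCV, with parameter values assumed for illustration, is:

    # Sketch of the ANPR pre-processing pipeline described above (assumed params).
    import cv2

    frame = cv2.imread('frame.jpg')                     # captured camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # gray scaling
    denoised = cv2.bilateralFilter(gray, 11, 17, 17)    # noise removal, edges kept
    edges = cv2.Canny(denoised, 30, 200)                # edge map for plate search

    # Candidate plate regions would next be localized (here via contours) and
    # passed to the plate-localization CNN and OCR stage described in the paper.
    contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)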

KEYWORDS

ANPR, CNN, OCR, Raspberry Pi, Camera Module, Multi-Thread, Pipeline.


Teachers' Experiences of Integrating Digital Technologies in ELT: A DigCompEdu Perspective

Dammar Singh Saud, Far Western University, Nepal

ABSTRACT

Incorporating digital technologies into English Language Teaching (ELT) has become crucial for enriching the teaching and learning processes. However, in Nepal, a developing nation marked by challenging terrain, there is, despite the widespread adoption of technology, a significant gap in our knowledge regarding the practices of teacher educators, especially those working in remote areas. To bridge this knowledge gap, this hermeneutic phenomenological study delves into the experiences of four English language teacher educators situated in Darchula, a remote district in Nepal's far-western region, concerning the integration of digital technologies into English language teaching. Through semi-structured interviews and thematic analysis guided by the DigCompEdu Framework, this research uncovers teacher educators' lived experiences of the incorporation of digital technologies into ELT. They use digital tools to enhance teaching and learning experiences, promote student engagement, enhance access to learning materials, and create dynamic and interactive learning environments. Nonetheless, the study emphasizes the necessity of addressing technical challenges and adopting a balanced approach when utilizing online resources to maximize benefits while mitigating drawbacks. By furnishing teacher educators and policymakers in Nepal with a deeper understanding of the importance of digital technologies and the potential offered by the DigCompEdu framework, this article strives to facilitate a more efficient integration of digital technologies within ELT classrooms.

KEYWORDS

Digital Technologies, English Language Teaching, DigCompEdu Framework, Teacher Educators, Digital Competence.


A Human-centered Design for an Android-based Oral Information Management Solution for Illiterate and Low-literate People of Pakistan

Ayalew Belay Habtie, Brett Hudson Matthews, David Myhre and Aman Aalam, My Oral Village, Inc., Toronto, Canada

ABSTRACT

The advent of mobile money has transformed how people manage and conduct their financial transactions. However, conventional mobile money systems predominantly rely on text-based menus, posing significant challenges for illiterate and low-literate individuals in effectively using these services. In response, this study introduces a human-centered solution designed to closely mirror the familiar practices of handling cash among these user groups. Our solution comprises an interface layer, a database layer, and digitalized currencies, allowing users to tap on virtual currencies or coins to perform various financial activities. Implemented within the Android environment, the solution includes tutorial videos to guide users in navigating and utilizing the application effectively. Our human-centered design approach for this Android-based mobile money solution represents a significant advancement in enhancing financial empowerment for illiterate and low-literate individuals in Pakistan. By prioritizing user-friendly design principles and addressing the specific needs of these users, our application promotes greater financial inclusion and economic participation. This innovative solution not only bridges the gap between technological advancement and accessibility but also contributes to the socio-economic development of the country, fostering a more inclusive and equitable financial ecosystem.

KEYWORDS

Human-Centered Design, Digital Currency, Mobile Money, Financial Inclusion, Android Environment.


Integrating AI Tools Thoughtfully into the Curriculum to Complement Human Instructions

Mohammad Haseen, Department of English, King Abdulaziz University, Jeddah, K.S.A.

ABSTRACT

The integration of advanced AI-powered educational tools is revolutionizing the landscape of self-directed learning, offering unprecedented opportunities for personalization and learner autonomy. These cutting-edge technologies, such as adaptive learning systems and intelligent tutoring platforms, can create highly individualized learning pathways that respond dynamically to each student's progress, preferences, and challenges (Holmes et al., 2019). By leveraging machine learning algorithms and natural language processing, these tools can analyse vast amounts of data to provide tailored content, real-time feedback, and targeted interventions, effectively supporting students in their self-directed learning journey (Zawacki-Richter et al., 2019). Moreover, AI-powered collaboration tools are fostering innovative peer-to-peer learning experiences by intelligently connecting students based on complementary skills and knowledge, thereby enhancing social learning and knowledge construction (Luckin et al., 2016). The effective implementation of these tools, however, requires a delicate balance between technological integration and human guidance, with educators playing a crucial role in promoting digital literacy, critical thinking, and ethical awareness among students as they navigate these AI-enhanced learning environments.

KEYWORDS

Human Judgment, Simulations, Technological Integration, Collaboration Tools.


Hardware-backed IoT Sensor Information Tracking on the Cardano Blockchain

A. Adhikari, M. Ramu, R. Thomas, H. Su, and B. Ramesh, International Centre for Neuromorphic Systems, The MARCS Institute, Western Sydney University, New South Wales, Australia

ABSTRACT

The tracking of sensor information has advanced significantly with the rise of the Internet of Things (IoT) and cloud computing, replacing local storage and records. However, it faces challenges such as data leakage, compromised privacy, data tampering, and origin misrepresentation due to mutable data storage and central points of failure. Blockchain-based solutions have been proposed, but they often suffer from high costs, limited scalability, and vulnerability to data tampering in cloud-based processes. This paper introduces a novel approach using Extended Unspent Transaction Output (eUTXO) blockchains, which offer better scalability, lower transaction costs, higher throughput, enhanced privacy, along with a tamper-resistant log. Our hardware-backed integration on the Cardano blockchain achieves decentralized edge IoT nodes and sensors, enhancing security and reliability in sensor data tracking. Our framework overcomes limitations of traditional blockchain methods and centralized cloud systems. By adopting this hardware-backed approach, IoT-based sensor tracking attains new levels of integrity and privacy. Comprehensive evaluations demonstrate the effectiveness and practicality of our system. The proposed framework addresses sensor tracking challenges and advances hardware-backed IoT solutions. We envision its application in secure and reliable industrial IoT systems, benefiting various industries with critical data tracking requirements.

KEYWORDS

Cardano Blockchain, Hardware-Backed Integration, Industrial IoT (IIoT), Sensor Information Tracking.


Blockchain Authentication Model for Vehicular Ad Hoc Networks

Abdelilah El Ihyaoui, My Abdelkader Youssefi and Ahmed Mouhsen, Hassan First University of Settat, Faculté des Sciences et Techniques, Laboratoire d'Ingénierie, de Management Industriel et d'Innovation (LIMII), Morocco

ABSTRACT

The evolution of Vehicular Ad-hoc Networks (VANETs) has brought significant benefits, such as enhanced road safety and driver comfort. However, ensuring the security of these networks remains a critical challenge. Traditional security models rely on a centralized schema with a single certification authority to manage, create, and revoke certificates. This paper proposes a novel security scheme for managing vehicle identities across well-defined regions using two key concepts: Public Key Infrastructure (PKI) and blockchain technology. By leveraging blockchain technology, we aim to decentralize the authentication process, enhancing the robustness and security of VANETs.

KEYWORDS

V2V, PKI, Blockchain, security, hash message authentication code (HMAC), access token, authentication, vehicular ad hoc networks (VANETs).


AlertNet: Accident Alert System

Roshini Nair a/p Tamil Selvan, Department of Computer and Information Science, Universiti Teknologi PETRONAS, Malaysia

ABSTRACT

This project leverages the advancements in sensor technologies to enhance vehicle safety through a comprehensive Accident detection and Alert system. Integrating GPS, GSM, an Accelerometer, and NDIR gas sensor, the system aims to provide immediate alerts during accidents, facilitating timely emergency responses and reducing casualties by detecting hazardous gas leaks. Key functionalities include real-time vehicle tracking via GPS, automatic SMS alerts with location and sensor data through GSM, accident severity detection using an accelerometer and impact calculation algorithm, and harmful gas detection with an NDIR sensor. Data is collected by the ESP32 and stored in Firebase, with a Web Dashboard displaying the data visually in line charts. This integrated approach aims to improve vehicle safety, offering real-time alerts and environmental monitoring to potentially save lives and mitigate the consequences of accidents.
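
The impact-calculation step can be pictured as thresholding the acceleration magnitude; a simplified version of such a check (the threshold value is assumed, not the project's calibrated one) is:

    # Simplified impact check from 3-axis accelerometer readings (assumed threshold).
    import math

    IMPACT_G = 4.0   # illustrative severity threshold in g

    def is_accident(ax: float, ay: float, az: float) -> bool:
        """Flag an accident when the acceleration magnitude exceeds the threshold."""
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        return magnitude >= IMPACT_G

    print(is_accident(3.2, 2.9, 1.1))   # True: combined impact above 4 g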

KEYWORDS

GPS, GSM, Accelerometer, NDIR Gas Sensor, ESP32.


A Unique spaCy and TextBlob Based NLP Approach for Financial Data Analysis

Atharv Bhilare, Shubhangi Tidake, Vishnupant Potdar, Dept. of Data Science, Symbiosis Skills and Professional University, Pune, Maharashtra, India

ABSTRACT

This research paper investigates the use of Natural Language Processing (NLP) and sentiment analysis to detect financial statement fraud by analyzing textual data in financial reports. Focusing on companies with known fraud cases (Wirecard, Tesco, and Under Armour), the study applies sentiment analysis to uncover linguistic cues that may indicate fraudulent behaviour. Utilizing spaCy and SpacyTextBlob NLP techniques, the research calculates polarity and subjectivity scores for the reports, comparing these to previous studies. The findings demonstrate that NLP can effectively detect financial statement fraud by revealing patterns and anomalies in the reports. The spaCy + SpacyTextBlob method significantly improves accuracy in identifying linguistic indicators of fraud, outperforming the TextBlob method and offering a more nuanced and precise detection of potential fraudulent activities.
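
The polarity and subjectivity scoring described can be reproduced in outline as follows; note that spacytextblob version 4 and later expose the scores via doc._.blob, while earlier versions used doc._.polarity directly.

    # Sketch of the spaCy + SpacyTextBlob scoring used in the study.
    # Assumes spacytextblob >= 4, which exposes scores via doc._.blob.
    import spacy
    from spacytextblob.spacytextblob import SpacyTextBlob  # registers the pipe

    nlp = spacy.load('en_core_web_sm')
    nlp.add_pipe('spacytextblob')

    report_text = 'Revenue growth remained strong despite challenging conditions.'
    doc = nlp(report_text)

    # Polarity lies in [-1, 1]; subjectivity lies in [0, 1].
    print('polarity:', doc._.blob.polarity)
    print('subjectivity:', doc._.blob.subjectivity)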

KEYWORDS

Natural Language Processing (NLP), Sentiment Analysis, Financial Statement Fraud.


Bi-directional Head-driven Parsing for English to Indian Languages Machine Translation

Pavan Kurariya, Prashant Chaudhary, Jahnavi Bodhankar, Lenali Singh and Ajai Kumar, Centre for Development of Advanced Computing, Pune, India

ABSTRACT

In the age of Artificial Intelligence (AI), a significant breakthrough occurred as machines demonstrated their ability to communicate in human languages. This marked the beginning of a revolutionary period for Natural Language Processing, defined by unparalleled computational capabilities. Amidst this evolution, parsers stand as an indispensable component, facilitating syntactic comprehension and empowering various NLP applications, from Machine Translation to sentiment analysis. Parsers play a crucial role in deciphering the complex syntactic structures inherent in human languages, enabling computers to extract meaning, facilitate syntactic understanding, and support a wide range of NLP applications, including machine translation, sentiment analysis, and information retrieval. This research paper presents the implementation of a Bi-Directional Head-Driven Parser, aiming to expand the horizons of NLP beyond the constraints of traditional Earley-type LTAG (Lexicalized Tree Adjoining Grammar) parsing. While effective, conventional parsers encounter inherent limitations in grappling with the intricacies and subtleties of natural language. Through the utilization of Bi-Directional principles, Head-Driven techniques offer a breakthrough in computational frameworks for large-scale grammar parsing, enabling complex NLP tasks such as discourse analysis and semantic parsing, and guaranteeing reliable linguistic analysis for practical applications. Through experimental analysis of Bi-Directional traversal, interlingual considerations, and modern architecture, this research showcases how the new parser facilitates breakthroughs in language processing, syntactic analysis, semantic comprehension, and beyond. Moreover, it underscores the structural implications of integrating Head-Driven Parsing. Traditional approaches, such as Tree Adjoining Grammar (TAG), while valuable, often encounter limitations in capturing the full spectrum of linguistic phenomena, particularly in the context of cross-linguistic transfer between English and Indian languages. Recognizing the importance of NLP in addressing these challenges, this paper presents an implementation of a Bi-Directional Head-Driven Parser. Drawing upon the rich foundation of TAG and acknowledging its constraints, our approach transcends these limitations by harnessing advanced parsing traversal techniques and linguistic theories. By bridging the gap between theory and application, our approach not only enhances our understanding of syntactic parsing across language families but also surpasses the performance of an Earley-type parser in terms of time and memory. Through rigorous experimentation and evaluation, this research contributes to the ongoing discourse on expanding the frontiers of Tree Adjoining Grammar-based research and shaping the trajectory of Machine Translation.

KEYWORDS

Artificial Intelligence (AI), Natural Language Processing (NLP), Tree Adjoining Grammar (TAG), LTAG (Lexicalized Tree Adjoining Grammar).


An Investigation of LLMs' Limitations in Interpreting and Producing Indexicals With ChatGPT

Batuhan Erdogan, Bogazici University, Istanbul, Turkey

ABSTRACT

This study examines the limitations of OpenAI's ChatGPT models (GPT-3.5 and GPT-4) in interpreting and utilizing indexicals. While GPT-4 shows some performance improvements over GPT-3.5, both models frequently misinterpret indexicals in prompts and occasionally err in producing them in specific contexts. The models' abilities vary with the type of contextual environment simulated by the user, demonstrating better competence in discrete environments and conversational implicatures. ChatGPT generally avoids context-dependent language in its responses. Through word frequency analysis of four demonstrative indexicals across essays written by humans and the two GPT models, we found GPT-4 significantly more likely to produce such indexicals than GPT-3.5. Inspired by Heidegger's concept of Being-in-the-World, we propose a new training method using narratives with multiple first-person perspectives within a fictional world to enhance the models' handling of pronominal indexicals.
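
The word-frequency comparison reported here is straightforward to replicate in outline; the demonstrative set below (this/that/these/those) is only an assumption about which four indexicals were counted.

    # Sketch of the demonstrative-frequency analysis (the four indexicals counted
    # in the study are assumed here to be this/that/these/those).
    import re
    from collections import Counter

    DEMONSTRATIVES = {'this', 'that', 'these', 'those'}

    def demonstrative_rate(essay: str) -> float:
        """Return demonstratives per 1,000 tokens in the given text."""
        tokens = re.findall(r"[a-z']+", essay.lower())
        hits = Counter(t for t in tokens if t in DEMONSTRATIVES)
        return 1000 * sum(hits.values()) / max(len(tokens), 1)

    print(demonstrative_rate('That said, these results suggest that this holds.'))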

KEYWORDS

Artificial Intelligence, Pragmatics, Semantics, Linguistics, Indexicals, LLMs, Artificial Neural Networks, Philosophy of Artificial Intelligence.


Enhancing Aspect-based Sentiment Analysis for Tamil Using Bi-LSTM With Attention Mechanism

Venkat Vedanarayanan, Aniska Chatterjee, Jerush Imanto M and Joe Dhanith PR, School of Computer Science Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India

ABSTRACT

In this paper, we propose a novel Tamil Aspect-Based Sentiment Analysis (ABSA) method utilizing a Bidirectional Long Short-Term Memory (Bi-LSTM) network with an attention mechanism. This model effectively addresses the challenges presented by morphologically rich languages like Tamil. Our method is evaluated using a Tamil dataset derived from the Sem-Eval 2014 restaurant reviews, a benchmark for ABSA tasks. The Bi-LSTM-attention model significantly outperforms traditional machine learning and standard deep learning approaches, achieving an F1-score of 80% and a precision of 72% in sentiment classification. Detailed analysis, including an ablation study, underscores the importance of the Bi-LSTM and attention components. This work enhances sentiment analysis for low-resource languages, offers insights for other Tamil NLP tasks, and demonstrates the applicability of deep learning to other morphologically rich languages, paving the way for robust multilingual sentiment analysis systems.
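
A compact Keras rendering of the Bi-LSTM-with-attention idea (layer sizes and the three-way sentiment output are assumptions, not the paper's exact settings) is shown below.

    # Sketch of a Bi-LSTM with additive attention pooling for sentiment classes.
    import tensorflow as tf
    from tensorflow.keras import layers

    MAX_LEN, VOCAB, CLASSES = 60, 20000, 3   # assumed sizes

    tokens = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB, 128)(tokens)
    h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

    # Attention: score each timestep, softmax over time, weighted sum of states.
    scores = layers.Dense(1, activation='tanh')(h)
    weights = layers.Softmax(axis=1)(scores)
    context = tf.reduce_sum(weights * h, axis=1)

    outputs = layers.Dense(CLASSES, activation='softmax')(context)
    model = tf.keras.Model(tokens, outputs)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])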

KEYWORDS

Aspect-Based sentiment analysis, Attention, Bi-LSTM, Low Resource Language processing, Tamil Language.


Enhancing Test Automation With Deep Learning: Techniques, Challenges, and Future Prospects

Narendar Kumar Ale, MS [IT], University of the Cumberlands, Williamsburg, KY, USA

ABSTRACT

Test automation is crucial for maintaining software quality and efficiency, especially in today's fast-paced development environments. Deep learning, a subset of machine learning, offers promising advancements in automating complex testing processes. This paper explores various techniques for integrating deep learning into test automation, identifies the challenges faced, and discusses the prospects of this technology in enhancing software testing efficiency and effectiveness. Detailed case studies, future prospects, and comprehensive literature reviews are included to provide a thorough understanding of the subject.

KEYWORDS

Test Automation, Deep Learning, Software Testing, Machine Learning, AI.


AI Assisted Software Engineering

Prateek Khanna, Research Scholar, Amity University, Ranchi

ABSTRACT

AI-assisted software engineering (AI-SE) refers to the integration of artificial intelligence techniques into the software development lifecycle (SDLC) to augment developers' capabilities and improve the overall development process. It aims to augment developer productivity using AI technology as an enabling tool. Increased efficiency and improved quality can potentially translate to lower development costs. AI-SE tools automate repetitive tasks across various development stages, freeing developers to focus on complex problem-solving. Code completion, static code analysis, and automated testing are just a few examples of AI-SE functionalities that streamline workflows and reduce development time. Beyond efficiency gains, AI-SE empowers developers by automating routine tasks. This frees them to explore new architectural designs, experiment with emerging technologies, and contribute more meaningfully to the overall software vision. AI-powered code generation and intelligent assistance tools can also bridge the skill gap between junior and senior developers, fostering a more inclusive and accessible software development environment. However, the effectiveness of AI-SE models heavily relies on the quality and diversity of data they are trained on. Biases in training data can lead to biased recommendations from AI tools. Mitigating bias through diverse datasets and ensuring explainability in AI-SE models are crucial for building trust with developers. Looking ahead, AI-SE is expected to become even more sophisticated. Advancements in generative AI for complex code creation, explainable AI for greater transparency, and AI-powered debugging tools are on the horizon. The seamless integration of AI-SE with Agile methodologies holds the promise of a continuous improvement cycle, ensuring high-quality software delivery at a faster pace. By embracing AI-SE and adapting development practices, software engineering teams can unlock new levels of efficiency, quality, and innovation. This paper provides an overview of AI-SE, its applications, benefits, challenges, and future outlook, making it a valuable resource for both developers and those interested in the future of software development.

KEYWORDS

AI, Software Engineering, AI-SE, SDLC.


GPS Accuracy and Enhancement in Android Development

Naga Satya Praveen Kumar Yadati, United States of America

ABSTRACT

This study investigates the factors impacting GPS services and describes the three segments of GPS services: space, control, and user segments. It highlights the limitations in open research for the space and control segments due to their use in military applications. The paper discusses various areas of development for GPS services, such as geotagging, optimizing routing and navigation, and improving data reception capabilities. It explores the implications of GPS technologies in fields like e-commerce, the automobile industry, and supply chain management. The study also forecasts the GPS tracking device market and emphasizes the need for more research in commercial GPS applications.

KEYWORDS

GPS technologies, geotagging, GNSS, Multi-GNSS, GPS accuracy, GPS enhancement, satellite navigation, differential GPS, real-time kinematic positioning, location-based services, navigation optimization, autonomous vehicles, IoT integration, signal processing, augmented reality, GPS receiver design, GPS applications, control segment, space segment, user segment, GPS market, e-commerce, supply chain management, mobile GPS, GPS limitations, GPS research, GPS innovation, GPS technology development.


A Multimodal Approach for English to Telugu Translation With FNet and Transformer Models

Joshita Malla, Ujwala Kanumuri, Swathi Mytreye Chaganti and Swaminathan Jayaraman, Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, India

ABSTRACT

This research paper develops a versatile multimodal translation system specifically designed for translating English to Telugu, addressing accessibility issues across various applications. Our method uses ResNet50 for robust image feature extraction and word tokenization to generate English and Telugu tokens. Translation is performed through an FNet encoder-decoder architecture, with the multimodal encoder acting as the combiner of textual and visual information and the decoder generating the Telugu translation, ensuring results that are both accurate and contextually appropriate. In extensive experiments, considerable improvements were observed over unimodal approaches, especially in translation quality. Our framework goes beyond assistive devices for the blind and can serve as a navigation aid, communication tool, or educational resource, thereby improving access and inclusion among individuals who speak different languages.
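
The image-feature side of such a pipeline can be sketched with Keras's pretrained ResNet50, as below; the fusion with the FNet encoder-decoder is the paper's contribution and is not reproduced here.

    # Sketch: ResNet50 image-feature extraction for a multimodal encoder.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    # Pretrained backbone without the classification head, pooled to one vector.
    backbone = ResNet50(weights='imagenet', include_top=False, pooling='avg')

    img = tf.keras.utils.load_img('scene.jpg', target_size=(224, 224))
    batch = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

    features = backbone.predict(batch)     # shape (1, 2048) visual feature vector
    print(features.shape)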

KEYWORDS

Multimodal translation, ResNet50, Transformer-based encoder-decoder, accessibility, inclusivity.


Deep Fusion With Attention Neural Network for Soil Health Based Crop Yield Estimation Using Remote Sensing Images

Vijaykumar P. Yele1, R. R. Sedamkar2, Sujata Alegavi3, 1Research Scholar, Department of Electronics & Telecommunication Engineering, Thakur College of Engineering and Technology, India, 2Professor, Department of Computer Engineering, Thakur College of Engineering and Technology, India, 3Associate Professor, Department of Internet of Things, Thakur College of Engineering and Technology, India

ABSTRACT

Crop yield estimation, vital for agricultural planning, incorporates weather, soil health, and technology. Utilizing remote sensing to analyze soil health enhances agricultural management and resource optimization. Despite challenges like data accuracy and cloud interference, the proposed Multi-head Cross Attention with Capsule Energy Valley Network (MHCA-CEVN) tackles these issues. This research integrates Sentinel-1 and Sentinel-2 data with field measurements, employing advanced preprocessing and feature extraction methods, such as the guided multi-layer side window box filter and the shearlet transform. The hybrid gold rush mantis search optimizer selects key features for a deep visual attention-based fusion method. The resulting MHCA-CEVN classification model achieves outstanding performance, with accuracy, sensitivity, error rate, f1-score, mean absolute percentage error, and symmetric mean absolute percentage error at 97.59%, 95.21%, 6.65%, 90.21%, 5.01%, and 0.042%, respectively. These metrics highlight the model's efficacy in addressing diverse crop yield challenges, establishing it as a robust solution for remote sensing.

KEYWORDS

Crop yield estimation, MHCA-CEVN, Guided multi-layer side window box filter and shearlet transform, Hybrid gold rush mantis search optimizer, Deep Visual Attention.


Geospatial Intelligence Enhancement Using Advanced Data Science and Machine Learning: A Systematic Literature Review

Vinothkumar Kolluru, Sudeep Mungara, Advaitha Naidu Chintakunta, Lokesh Kolluru, Charan Sundar Telaganeni

ABSTRACT

In the era of rapid advancements in artificial intelligence, the geospatial field is experiencing transformative changes. Traditional methods for land cover classification and anomaly detection have often been inconsistent and inaccurate, leading to significant real-world issues such as resource misallocation, unnoticed illegal activities like deforestation, unmonitored topographical changes such as unauthorized constructions, unattended forest fires, and border fence crises, all of which exacerbate climate change and urbanization challenges. This study systematically explores various machine learning (ML) techniques and their application to publicly available geospatial datasets. Specifically, it compares selected Convolutional Neural Networks (CNNs) and other ML models on these datasets to evaluate multiple performance metrics and conduct a comparative analysis. While numerous ML models have been previously employed for land cover classification and anomaly detection, this review seeks to enhance performance metrics and improve classification accuracy. Prior studies have employed techniques such as Random Forest on Sentinel-2 data (Gromny et al., 2019), multiple regression approaches on Landsat data (Wu et al., 2016), and Principal Component Analysis (PCA) on OpenStreetMap data (Feldmeyer et al., 2020). Our study introduces the application of advanced models like VGG16, U-Net, and Isolation Forest to geospatial datasets, assessing their impact on enhancing land cover classification and anomaly detection. This research not only aims to achieve higher classification accuracy but also contributes to the field by providing insights into the effectiveness of these models and proposing future directions and opportunities.
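
Of the models named, Isolation Forest is the anomaly detector; as a pointer for readers, its scikit-learn usage on per-pixel feature vectors (the band features below are random placeholders, not the reviewed datasets) looks like:

    # Sketch: Isolation Forest anomaly detection on per-pixel spectral features.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Placeholder: rows are pixels, columns are spectral-band features.
    rng = np.random.default_rng(0)
    pixels = rng.normal(size=(10_000, 6))
    pixels[:50] += 6.0                       # inject a small anomalous cluster

    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(pixels)    # -1 = anomaly, 1 = normal
    print('anomalous pixels flagged:', int((labels == -1).sum()))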


Identification of Uterine Disorder using Different Deep Learning Models

Baby Vijilin and Dr. Anitha V.S, Govt. Engineering College Wayanad, Kerala Technological University, India

ABSTRACT

Uterine disorders significantly impact women's reproductive health and overall well-being. This paper considers their risk factors, early detection methods, and advances in personalized treatment options, highlighting the importance of a multidisciplinary approach to managing these potentially life-threatening conditions. It details various research on diagnosing uterine disorders using deep learning and presents a comparative study of different deep learning models for detecting PCOS from ultrasound images. Performance measures such as accuracy, precision, recall, confusion matrix, F1 score, training and validation time, and execution time for prediction are used to compare the results of the different deep learning models.

KEYWORDS

Uterine Disorders, Deep Learning, ultrasound, PCOS.


Early Detection and Segmentation of Cancer Cell Image of Liver by Using Improved Marker-controlled Watershed Algorithm With Sobel Edge-detection Approach

Tahamina Yesmin1, Udit Kr Chakraborty2, Kisalaya Chakrabarti2, 1Department of Computer Science and Engineering, Sikkim Manipal Institute of Technology, Majitar, Sikkim, 2Department of Electronics and Communication Engineering, Haldia Institute of Technology, Haldia, WB

ABSTRACT

To date, liver cancer remains one of the most critical and complex diseases to detect and diagnose properly, and one of the deadliest across its different stages. Cancer cell detection is very difficult in medical image analysis because many features and segments must be considered during diagnosis in order to extract the affected area. CT, MRI, and X-ray scans are used in medical image analysis to identify the disease, but even these sometimes fail to give clear and detailed information. Various studies have presented and developed methods for this problem. Without discarding the ideas of previous studies, this study presents a new view of the segmentation approach: an improved marker-controlled watershed combined with the Sobel edge detection method, yielding a less over-segmented image, reducing the noise arising from the unprocessed image, and giving a clearer view of the biomedical image. The process supports early detection of cancer cells and, through proper diagnosis, helps prevent them from spreading to other parts of the body. This study also proposes a statistical analysis of the images obtained after the proposed algorithm is applied.
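
In outline, the proposed combination rests on standard building blocks: a Sobel gradient as the watershed elevation surface and markers to control over-segmentation. A generic scikit-image sketch of that baseline (not the authors' improved algorithm; file name and thresholds are assumptions) is:

    # Generic marker-controlled watershed with a Sobel elevation map (scikit-image).
    # This is the standard baseline the paper improves upon, not the proposed method.
    import numpy as np
    from skimage import filters, io, measure
    from skimage.segmentation import watershed

    image = io.imread('liver_ct.png', as_gray=True)
    elevation = filters.sobel(image)            # Sobel gradient as elevation map

    # Markers from conservative intensity thresholds (values are illustrative).
    markers = np.zeros_like(image, dtype=int)
    markers[image < 0.25] = 1                   # background
    markers[image > 0.75] = 2                   # candidate lesion tissue

    segmented = watershed(elevation, markers)
    regions = measure.label(segmented == 2)
    print('candidate regions:', regions.max())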

KEYWORDS

Morphology, Marker-controlled, Cancer, Segmentation, Medical, Image.


Power Spectrum Estimation Method to Detect Target Frequency From SSVEP Brain Signals

Hritik1, Mukesh Kumar Ojha2 and Manoj Kumar Mukul3, 1Department of Electronic Science, University of Delhi, Delhi, India, 2Department of ECE, Greater Noida Institute of Technology, Greater Noida, India, 3Department of Electronic Science, University of Delhi, Delhi, India

ABSTRACT

The authors of this paper focus on a combined approach of empirical mode decomposition (EMD) and power spectrum estimation (PSE) to detect the target frequency from a single-channel recorded electroencephalogram (EEG) signal. SSVEP is a periodic signal captured within a recorded EEG signal when the subject fixes attention on a visual stimulus flickering at a certain frequency. The most critical issue is determining the target flicker frequency from the SSVEP brain signal. The single-channel EEG signals are decomposed into several oscillating functions known as Intrinsic Mode Functions (IMFs) using EMD. Various PSE methods, such as FFT, Welch, Burg, and Yule-Walker, are applied to plot the magnitude spectrum of the corresponding IMFs. Finally, a threshold detection technique is employed on the chosen IMFs to identify the target frequency. The recorded EEG signals contain stimulus frequencies from 6 Hz to 15.8 Hz. The obtained findings show that the EMD+Welch method outperforms the EMD+FFT, EMD+Yule-Walker, and EMD+Burg methods.
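
The EMD+Welch combination can be outlined as follows (PyEMD for the decomposition, SciPy for the periodogram); the synthetic 10 Hz signal and sampling rate stand in for a recorded EEG channel.

    # Sketch of the EMD + Welch pipeline for SSVEP target-frequency detection.
    # A synthetic 10 Hz signal stands in for the recorded single-channel EEG.
    import numpy as np
    from PyEMD import EMD
    from scipy.signal import welch

    fs = 256                                   # assumed sampling rate (Hz)
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    imfs = EMD().emd(eeg)                      # intrinsic mode functions

    # Welch PSD per IMF; report each IMF's dominant frequency.
    for i, imf in enumerate(imfs):
        f, pxx = welch(imf, fs=fs, nperseg=512)
        print(f'IMF {i}: peak at {f[np.argmax(pxx)]:.1f} Hz')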

KEYWORDS

Empirical Mode Decomposition (EMD), Electroencephalography (EEG), Power Spectrum Estimation (PSE), Steady State Visual Evoked Potential (SSVEP), Brain-Computer Interface (BCI).