Accepted Papers


Vegetation Typification Integrated With Time Series Using Google Earth Engine

K. Rohith, T. Pranoom, V. Hari Vamsi and G. Jaya Lakshmi, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India.

ABSTRACT

Using the Google Earth Engine platform, we survey vegetation types in remote areas using remotely sensed information. Specifically, we use Landsat imagery and Sentinel-2A data. Our aim is to improve vegetation assessment through typification of vegetation using the multispectral capabilities of Landsat imagery. We use state-of-the-art image processing and machine learning algorithms to accurately classify different plant species in the selected study areas. We also track temporal changes in vegetation using Sentinel-2A imagery, making it possible to analyze land cover changes over time. Our approach to vegetation monitoring and change detection is broad because it combines the unique and rich spectral character of Sentinel-2A data. Plant cover is classified scientifically and then grouped into NDVI index ranges.
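The NDVI-range grouping mentioned above can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the class names and thresholds are common rule-of-thumb values, not the ranges used in the paper.

```python
# Illustrative sketch: classifying pixels into coarse vegetation classes
# by NDVI ranges. Thresholds and labels are assumptions for illustration.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def classify_ndvi(value):
    """Map an NDVI value to a coarse land-cover class."""
    if value < 0.1:
        return "barren/water"
    elif value < 0.3:
        return "sparse vegetation"
    elif value < 0.6:
        return "moderate vegetation"
    else:
        return "dense vegetation"

# Example: a pixel with NIR reflectance 0.5 and red reflectance 0.1
print(classify_ndvi(ndvi(0.5, 0.1)))
```

In practice this thresholding would run server-side on Google Earth Engine over whole image collections rather than per pixel in Python.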

KEYWORDS

Vegetation types, Remote sensing techniques, Multispectral capabilities, Categorization, Land management decisions, Robust methodology.


Heart Disease Prediction Using Data Mining Classification Algorithms

Deepanshu Sharma and Dr. Siddhartha Chauhan, Department of Computer Science and Engineering, NIT Hamirpur, H.P., India.

ABSTRACT

Heart diseases, also referred to as "cardiovascular diseases," are a group of disorders that affect the heart. This illness can cause a heart attack, stroke, and other symptoms. After examining several research papers on the subject, it became clear that the majority used a single machine learning algorithm to predict heart disease. A few state that they are unable to enhance their models' performance through optimization techniques. As a result, they encountered difficulties in effectively predicting heart disease with their suggested methods. In an earlier study, PCA was also used, but it failed to provide considerable accuracy for such a sensitive research area, i.e., medical diagnosis. Data for this method was gathered from the "Heart Disease UCI" repository, accessible on Kaggle. Working on the given dataset, we applied various dimensionality reduction techniques with various classifiers and evaluated their effectiveness. Thus, we were able to achieve considerably higher accuracy (98%) by using certain techniques to de-noise the data (checking correlations, detecting and removing outliers, etc.) and using the MLP classifier.
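The de-noising steps the abstract describes (correlation checks, outlier removal) can be sketched with the standard library alone. This is a hedged illustration of the general technique, not the study's actual preprocessing; the sample values are invented.

```python
# Sketch of correlation checking and z-score outlier removal,
# as commonly done before fitting a classifier such as an MLP.
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def remove_outliers(values, z=3.0):
    """Drop points more than z standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= z * s]

cholesterol = [200, 210, 190, 205, 999, 195]   # 999 is a data-entry error
print(remove_outliers(cholesterol, z=2.0))
```

After such cleaning, the retained features would be fed to the classifier (here, an MLP) for training and evaluation.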

KEYWORDS

PCA, MLP, cardiovascular diseases, ML, Data mining.


Multi-faceted Question Complexity Estimation Targeting Topic Domain-specificity

Sujay R1, Suki Perumal1, Yash Nagraj1, Anushka Ghei2 and Srinivas K S1, 1Department of CSE(AI & ML), PES University, Karnataka, India, 2Department of CSE, PES University, Karnataka, India

ABSTRACT

Question difficulty estimation remains a multifaceted challenge in educational and assessment settings. Traditional approaches often focus on surface-level linguistic features or learner comprehension levels, neglecting the intricate interplay of factors contributing to question complexity. This paper presents a novel framework for domain-specific question difficulty estimation, leveraging a suite of NLP techniques and knowledge graph analysis. We introduce four key parameters: Topic Retrieval Cost, Topic Salience, Topic Coherence, and Topic Superficiality, each capturing a distinct facet of question complexity within a given subject domain. These parameters are operationalized through topic modelling, knowledge graph analysis, and information retrieval techniques. A model trained on these features demonstrates the efficacy of our approach in predicting question difficulty. By operationalizing these parameters, our framework offers a novel approach to question complexity estimation, paving the way for more effective question generation, assessment design, and adaptive learning systems across diverse academic disciplines.

KEYWORDS

Question difficulty estimation, knowledge graph analysis, BERT, Domain specific metrics, Topic modelling, Natural Language Processing, Question Answering, Cognitive Load, Learning Analytics.


Resume Analyzer

Vinayak Subray Hegde and Premalatha H M, Department of Computer Applications, PES University, Bengaluru.

ABSTRACT

The “Resume Analyzer” is an advanced web application that provides solutions for both recruiters and applicants using Natural Language Processing (NLP) technology. Its main purpose is to analyse an uploaded resume and provide predictions, suggestions, and advice to both job seekers and recruiters. Candidates upload their resume in PDF format, and the web application provides basic information, experience level, the predicted job role, existing and recommended skills, course recommendations matching the predicted job role, and YouTube links for interview and resume tips. For recruiters, it analyses the resume and provides basic information, existing and recommended skills, and parsed candidate information (for a better recruiting process), available for download.

KEYWORDS

Recruitment, Resume Analysis, Natural Language Processing (NLP), Candidate Selection, Career Development, User-Friendly Experience.


Harnessing Machine Learning and Blockchain for Supply Chain Innovation: a Comprehensive Exploration of Individual Applications

Dr. P. Venkateswara Rao, A. Bhuvana, M. Varshitha and T. Bhavishya, Department of Computer Science and Engineering, KL University, Vaddeswaram, Guntur

ABSTRACT

Blockchain, hailed as a groundbreaking technology, provides secure solutions in various fields. Its distributed database foundation eliminates single points of failure, resulting in time and cost savings. The evolving field of Machine Learning shows promising research prospects through its synergy with Blockchain. The decentralized nature of Blockchain, combined with the vast data in Machine Learning, opens up extensive exploration possibilities [1]. Positioned as a leader in shaping the internet, Blockchain offers a decentralized, secure, and auditable framework for data alteration and authentication, removing the need for intermediaries. The stability and privacy of Blockchain's decentralized database are maintained through a consensus process that ensures data protection and validity [2]. Through this exploration, we aim to highlight the various applications of these technologies and their impacts on optimizing supply chain processes.

KEYWORDS

decentralized architecture, predictive analytics, data augmentation, inventory management, shared ledger, data integrity.


A Deep Learning-based Phishing Detection System Using CNN, RNN, and Bilstm

Lakksh Tyagi, Dept. of Computer Science, Vellore Institute of Technology, Vellore, India

ABSTRACT

URL phishing, a type of cyberattack using deceptive URLs and emails, aims to exploit user trust in electronic communication. This research explores the integration of Machine Learning (ML) and Deep Learning (DL) techniques within Artificial Intelligence (AI) to enhance cybersecurity. Unlike traditional ML methods that rely on human expertise for feature extraction, DL models streamline detection and classification in a unified phase, minimizing the need for manual engineering. The study focuses on three cutting-edge deep learning techniques—Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Bidirectional Long Short-Term Memory (Bi-LSTM)—for predicting phishing URLs. A novel approach leveraging these models is proposed, emphasizing the Bi-LSTM's superior performance in capturing both past and future contextual information. The comprehensive evaluation of these models highlights the Bi-LSTM as the preeminent performer in phishing detection, offering significant real-time applications in cybersecurity.

KEYWORDS

URL phishing, Deep Learning, Convolutional Neural Networks, Phishing detection, Neural networks.


Bayesian Network and Idw Interpolation to Identify the Causal Relationship Between Human Intervention and Air Quality Index

Dr Hema Durairaj1 and L Priya Dharshini2, 1Senior Data Scientist, Publicis Sapient Pvt. Ltd., Bengaluru, KA, India, 2Postgraduate Student, Lady Doak College, Madurai, TN, India

ABSTRACT

As the human population explodes globally, meeting the demands for livelihood should also involve considerations of sustainability. Though there are several causes of global warming, air pollution makes a tremendous contribution to it. The Air Quality Index (AQI) measures how clean or polluted the air is in specific areas based on six major pollutants: sulfur dioxide (SO2), nitrogen dioxide (NO2), ground-level ozone (O3), carbon monoxide (CO), and particulate matter (PM2.5 and PM10). There are six levels in the AQI, namely "good," "satisfactory," "moderate," "poor," "very poor," and "severe," covering scores between 0 and 500. The implicit factor that affects the AQI is human movement within the environment. This research work involves real-time datasets collected from the TNPCB (Tamil Nadu Pollution Control Board) regarding Madurai’s AQI at three stations for the year 2021 (during the COVID-19 period). The Bayesian network exhibits the causal relationship between human movement and the Air Quality Index through probabilistic modelling. An IDW interpolation chart is also visualized to conceptualize the level of human intervention (nil, partial, and complete) against the AQI value obtained at the three stations.
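The interpolation step can be illustrated with a minimal pure-Python inverse-distance-weighting (IDW) sketch. The station coordinates and AQI values below are invented, and the power parameter of 2 is the common default rather than the paper's setting.

```python
# Minimal IDW sketch: estimate a value at an unmeasured point as a
# distance-weighted average of the station readings.
import math

def idw(stations, point, power=2):
    """stations: list of ((x, y), value); point: (x, y)."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.dist((x, y), point)
        if d == 0:
            return value            # exact hit on a station
        w = 1.0 / d ** power        # closer stations weigh more
        num += w * value
        den += w
    return num / den

stations = [((0, 0), 50), ((4, 0), 90), ((0, 4), 70)]   # made-up AQI values
print(round(idw(stations, (1, 1)), 1))
```

Evaluating this on a grid of points yields the kind of interpolation surface visualized in the paper's IDW charts.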

KEYWORDS

Air Quality Index, Bayes Theorem, Bayesian Network, IDW Interpolation.


Aws Pricing Structure From the Consumer's Perspective

Vishal Karpe, Sahil Prajapati and Dr. Meetu kandpal, Lok Jagruti University, Ahmedabad,India

ABSTRACT

This study explores the nuances of pricing structures for Amazon Web Services (AWS), with a particular emphasis on comprehending the different models and cost control mechanisms that AWS offers. AWS has a variety of price plans designed to accommodate a wide range of user requirements, including the AWS Free Tier trial period. Important pricing methods, such as pay-as-you-go, save-when-you-commit, and pay-less-by-using-more, are examined in this research, along with their consequences for users' AWS investment strategies. The article offers insights into the relative advantages of AWS pricing structures through illustrations and a comparative analysis with Microsoft Azure. In addition, it looks at key cost control tools that enable users to track and optimize their AWS spending, such as the Amazon Price Analyzer, AWS Budgets, and AWS Trusted Advisor. This document aims to help customers maximize cost efficiency and resource usage in their cloud computing undertakings by offering a thorough overview of AWS pricing and cost management tactics.
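The pay-as-you-go versus commitment trade-off can be shown with back-of-the-envelope arithmetic. The hourly rates below are invented for illustration; real AWS prices vary by region, instance type, and commitment term.

```python
# Toy comparison of on-demand vs. committed hourly pricing.
# Rates are illustrative assumptions, not actual AWS prices.

def monthly_cost(rate_per_hour, hours=730):
    """Approximate monthly cost for an always-on resource (~730 h/month)."""
    return rate_per_hour * hours

on_demand = monthly_cost(0.10)   # pay-as-you-go rate ($/hour)
committed = monthly_cost(0.06)   # discounted 1-year commitment rate
print(round(on_demand - committed, 2))   # monthly saving from committing
```

The break-even question (whether expected utilization justifies the commitment) is exactly what tools like AWS Budgets help users monitor.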

KEYWORDS

Amazon Web Services (AWS), Pricing structures, Cost management tools, Comparative analysis, Cloud computing.


Proposing an Adaptive System for Enhancing the Efficiency of Low-power and Lossy Iot Networks

Anil Behal1, Lovejit Singh1 and Manjeet Singh2, 1Department of CSE, Chandigarh University, 2Department of CSE, Sharda University

ABSTRACT

In the world of IoT, low-power and lossy networks (LLNs) are inherently lossy and battery-operated, so their nodes have very limited power. To increase the efficiency of RPL, we propose a new objective function that can perform better than existing objective functions such as OF0 and MRHOF in terms of improved convergence time and decreased latency.


Security Concerns in Iot Light Bulbs: Investigating Covert Channels

Janvi Panwar and Ravisha Rohilla, Indira Gandhi Delhi Technical University for Women, India

ABSTRACT

The proliferation of Internet of Things (IoT) devices has raised significant concerns regarding their security vulnerabilities. This paper explores the security risks associated with smart light systems, focusing on covert communication channels. Drawing upon previous research highlighting vulnerabilities in communication protocols and encryption flaws, the study investigates the potential for exploiting smart light systems for covert data transmission. Specifically, the paper replicates and analyzes an attack method introduced by Ronen and Shamir, which utilizes the Philips Hue White lighting system to create a covert channel through visible light communication (VLC). Experimental results demonstrate the feasibility of transmitting data covertly through subtle variations in brightness levels, leveraging the inherent functionality of smart light bulbs. Despite limitations imposed by device constraints and communication protocols, the study underscores the need for heightened awareness and security measures in IoT environments. Ultimately, the findings emphasize the importance of implementing robust security practices and exercising caution when deploying networked IoT devices in sensitive environments.
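The core covert-channel idea (encoding bits as brightness variations too subtle to notice but recoverable with a light sensor) can be sketched in a few lines. The nominal level and offset below are illustrative values, not the Philips Hue parameters used in the replicated attack.

```python
# Toy sketch of a brightness-based covert channel: each bit becomes a
# small offset around a nominal brightness level (0-255 scale).
# NOMINAL and DELTA are illustrative assumptions.

NOMINAL, DELTA = 128, 2   # +/-2 is a subtle, hard-to-perceive variation

def encode(bits):
    """Map bits to brightness levels around the nominal setting."""
    return [NOMINAL + DELTA if b else NOMINAL - DELTA for b in bits]

def decode(levels):
    """Recover bits from sensed brightness levels."""
    return [1 if lvl > NOMINAL else 0 for lvl in levels]

message = [1, 0, 1, 1, 0]
print(decode(encode(message)))
```

A real attack additionally contends with PWM timing, sensor noise, and the bulb's command rate limits, which bound the achievable bit rate.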

KEYWORDS

Internet of Things (IoT), Security vulnerabilities, Smart light systems, ZigBee Light Link (ZLL) protocol, Denial-of-Service (DoS) attacks, Firmware manipulation, Covert communication channels, Visible light communication (VLC), Data exfiltration, Air-gapped networks, Smart LED bulbs, PWM modulation, Orthogonal Frequency-Division Multiplexing (OFDM), Light sensor, Signal processing.


Deciphering Cardiac Destiny: Unveiling Future Risks Through Cutting-edge Machine Learning Approaches

Ms. G. Divya, B. Pavan, T. Jaya Dharani, M. Naga Sravan Kumar and K. Praveen, Lakireddy Balireddy College of Engineering, India

ABSTRACT

Cardiac arrest remains a leading cause of death worldwide, necessitating proactive measures for early detection and intervention. This project focuses on developing and evaluating predictive models to promptly identify cardiac arrest incidents using a comprehensive dataset comprising clinical parameters and patient histories. Employing machine learning (ML) algorithms like XGBoost, Gradient Boosting, and Naive Bayes, alongside a deep learning (DL) approach with Recurrent Neural Networks (RNNs), we aim to enhance early detection capabilities. Rigorous experimentation and validation revealed the superior performance of the RNN model, which effectively captures complex temporal dependencies within the data. Our findings highlight the efficacy of these models in accurately predicting cardiac arrest likelihood, emphasizing the potential for improved patient care through early risk stratification and personalized interventions. By leveraging advanced analytics, healthcare providers can proactively mitigate cardiac arrest risk, optimize resource allocation, and improve patient outcomes. This research underscores the transformative potential of ML and DL techniques in cardiovascular risk management and contributes to the field of predictive healthcare analytics.

KEYWORDS

Cardiac arrest prediction, Machine learning, Predictive modeling, Patient care, Healthcare analytics.


Teachers' Experiences of Integrating Digital Technologies in Elt: A Digcompedu Perspective

Dammar Singh Saud, Far Western University, Nepal

ABSTRACT

Incorporating digital technologies into English Language Teaching (ELT) has become crucial for enriching the teaching and learning processes. However, in Nepal, a developing nation marked by challenging terrain, despite the widespread adoption of technology there is a significant gap in our knowledge regarding the practices of teacher educators, especially those working in remote areas. To bridge this knowledge gap, this hermeneutic phenomenological study delves into the experiences of four English language teacher educators situated in Darchula, a remote district in Nepal's far-western region, concerning the integration of digital technologies into English language teaching. Through semi-structured interviews and thematic analysis guided by the DigCompEdu Framework, this research uncovers teacher educators’ lived experiences of incorporating digital technologies into ELT. They use digital tools to enhance teaching and learning experiences, promote student engagement, improve access to learning materials, and create dynamic and interactive learning environments. Nonetheless, the study emphasizes the necessity of addressing technical challenges and adopting a balanced approach when utilizing online resources to maximize benefits while mitigating drawbacks. By furnishing teacher educators and policymakers in Nepal with a deeper understanding of the importance of digital technologies and the potential offered by the DigCompEdu framework, this article strives to facilitate a more efficient integration of digital technologies within ELT classrooms.

KEYWORDS

Digital Technologies, English Language Teaching, DigCompEdu Framework, Teacher Educators, Digital Competence.


A Human-centered Design for Android-based Oral Information Management Solution for Illiterate and Low Literate People of Pakistan

Ayalew Belay Habtie, Brett Hudson Matthews, David Myhre and Aman Aalam, My Oral Village, Inc., Toronto, Canada

ABSTRACT

The advent of mobile money has transformed how people manage and conduct their financial transactions. However, conventional mobile money systems predominantly rely on text-based menus, posing significant challenges for illiterate and low-literate individuals in effectively using these services. In response, this study introduces a human-centered solution designed to closely mirror the familiar practices of handling cash among these user groups. Our solution comprises an interface layer, database layer, and digitalized currencies, allowing users to tap on virtual currencies or coins to perform various financial activities. Implemented within the Android environment, the solution includes tutorial videos to guide users in navigating and utilizing the application effectively. Our human-centered design approach for this Android-based mobile money solution represents a significant advancement in enhancing financial empowerment for illiterate and low-literate individuals in Pakistan. By prioritizing user-friendly design principles and addressing the specific needs of these users, our application promotes greater financial inclusion and economic participation. This innovative solution not only bridges the gap between technological advancement and accessibility but also contributes to the socio-economic development of the country, fostering a more inclusive and equitable financial ecosystem.

KEYWORDS

Human-Centered Design, Digital Currency, Mobile Money, Financial Inclusion, Android Environment.


Hardware-backed IOT Sensor Information Tracking on the Cardano Blockchain

A. Adhikari, M. Ramu, R. Thomas, H. Su, and B. Ramesh, International Centre for Neuromorphic Systems, The MARCS Institute, Western Sydney University, New South Wales, Australia

ABSTRACT

The tracking of sensor information has advanced significantly with the rise of the Internet of Things (IoT) and cloud computing, replacing local storage and records. However, it faces challenges such as data leakage, compromised privacy, data tampering, and origin misrepresentation due to mutable data storage and central points of failure. Blockchain-based solutions have been proposed, but they often suffer from high costs, limited scalability, and vulnerability to data tampering in cloud-based processes. This paper introduces a novel approach using Extended Unspent Transaction Output (eUTXO) blockchains, which offer better scalability, lower transaction costs, higher throughput, enhanced privacy, along with a tamper-resistant log. Our hardware-backed integration on the Cardano blockchain achieves decentralized edge IoT nodes and sensors, enhancing security and reliability in sensor data tracking. Our framework overcomes limitations of traditional blockchain methods and centralized cloud systems. By adopting this hardware-backed approach, IoT-based sensor tracking attains new levels of integrity and privacy. Comprehensive evaluations demonstrate the effectiveness and practicality of our system. The proposed framework addresses sensor tracking challenges and advances hardware-backed IoT solutions. We envision its application in secure and reliable industrial IoT systems, benefiting various industries with critical data tracking requirements.
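The tamper-evidence property that the paper obtains from an eUTXO blockchain can be illustrated locally with a toy hash-chained log. This sketch is a plain-Python analogy for intuition only, not the Cardano mechanism itself.

```python
# Toy hash-chained sensor log: each entry commits to the previous
# entry's hash, so altering any reading breaks verification.
import hashlib
import json

def append(log, reading):
    """Append a reading, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"reading": reading, "prev": prev}, sort_keys=True)
    log.append({"reading": reading, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return log

def verify(log):
    """Recompute every hash; any tampering yields False."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"reading": entry["reading"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
for temp in (21.5, 21.7, 22.0):   # made-up sensor readings
    append(log, temp)
print(verify(log))
```

A blockchain strengthens this idea by distributing the chain across validators, removing the single point of failure a local log retains.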

KEYWORDS

Cardano Blockchain, Hardware-Backed Integration, Industrial IoT (IIoT), Sensor Information Tracking.


A Unique Spacy and Textblob Based Nlp Approach for Financial Data Analysis

Atharv Bhilare, Shubhangi Tidake, Vishnupant Potdar, Dept. of Data Science, Symbiosis Skills and Professional University, Pune, Maharashtra, India

ABSTRACT

This research paper investigates the use of Natural Language Processing (NLP) and sentiment analysis to detect financial statement fraud by analyzing textual data in financial reports. Focusing on companies with known fraud cases—Wirecard, Tesco, and Under Armour—the study applies sentiment analysis to uncover linguistic cues that may indicate fraudulent behaviour. Utilizing spaCy and spacytextblob NLP techniques, the research calculates polarity and subjectivity scores for the reports, comparing these to previous studies. The findings demonstrate that NLP can effectively detect financial statement fraud by revealing patterns and anomalies in the reports. The Spacy + SpacyTextBlob method significantly improves accuracy in identifying linguistic indicators of fraud, outperforming the TextBlob method and offering a more nuanced and precise detection of potential fraudulent activities.
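The polarity scoring can be illustrated with a pure-Python stand-in: averaging per-word sentiment from a lexicon. The tiny lexicon below is hand-made for illustration; the actual pipeline uses spaCy with spacytextblob, which draws on TextBlob's full lexicon.

```python
# Illustrative lexicon-based polarity scorer (a simplified stand-in for
# the spaCy + spacytextblob pipeline). LEXICON values are assumptions.

LEXICON = {"strong": 0.5, "growth": 0.4, "record": 0.3,
           "loss": -0.6, "decline": -0.5, "fraud": -0.8}

def polarity(text):
    """Average sentiment of known words; 0.0 if none are known."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(polarity("strong growth this quarter"))   # positive score
print(polarity("decline and loss reported"))    # negative score
```

Applied to whole annual reports, shifts in such scores (and in subjectivity) are the linguistic anomalies the study associates with fraudulent filings.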

KEYWORDS

Natural Language Processing (NLP), Sentiment Analysis, Financial Statement Fraud.


Bi-directional Head-driven Parsing for English to Indian Languages Machine Translation

Pavan Kurariya, Prashant Chaudhary, Jahnavi Bodhankar, Lenali Singh and Ajai Kumar, Centre for Development of Advanced Computing, Pune, India

ABSTRACT

In the age of Artificial Intelligence (AI), a significant breakthrough occurred as machines demonstrated their ability to communicate in human languages. This marked the beginning of a revolutionary period for Natural Language Processing, defined by unparalleled computational capabilities. Amidst this evolution, parsers stand as an indispensable component, facilitating syntactic comprehension and empowering various NLP applications, from Machine Translation to sentiment analysis. A parser plays a crucial role in deciphering the complex syntactic structures inherent in human languages, enabling computers to extract meaning and supporting a wide range of NLP applications, including machine translation, sentiment analysis, and information retrieval. This research paper presents the implementation of a Bi-Directional Head-Driven Parser, aiming to expand the horizons of NLP beyond the constraints of traditional Earley-type L-TAG (Lexicalized Tree Adjoining Grammar) parsing. While effective, conventional parsers encounter inherent limitations in grappling with the intricacies and subtleties of natural language. Through the utilization of Bi-Directional principles, Head-Driven techniques offer a revolutionary breakthrough in computational frameworks for large-scale grammar parsing, enabling complex NLP tasks such as discourse analysis and semantic parsing, and guaranteeing reliable linguistic analysis for practical applications. Through experimental analysis of Bi-Directional traversal, interlingual considerations, and modern architecture, this research showcases how new parsers facilitate breakthroughs in language processing, syntactic analysis, semantic comprehension, and beyond. Moreover, it underscores the structural implications of integrating Head-Driven Parsing.
Traditional approaches, such as Tree Adjoining Grammar (TAG), while valuable, often encounter limitations in capturing the full spectrum of linguistic phenomena, particularly in the context of cross-linguistic transfer between English and Indian languages. Recognizing the importance of NLP in addressing these challenges, this paper presents an implementation of a Bi-Directional Head-Driven Parser. Drawing upon the rich foundation of TAG and acknowledging its constraints, our approach transcends these limitations by harnessing advanced parsing traversal techniques and linguistic theories. By bridging the gap between theory and application, our approach not only enhances our understanding of syntactic parsing across language families but also surpasses the performance of an Earley-type parser in terms of time and memory. Through rigorous experimentation and evaluation, this research contributes to the ongoing discourse on expanding the frontiers of Tree Adjoining Grammar-based research and shaping the trajectory of Machine Translation.

KEYWORDS

Artificial Intelligence (AI), Natural Language Processing (NLP), Tree Adjoining Grammar (TAG), LTAG (Lexicalized Tree Adjoining Grammar).


An Investigation of Llms’ Limitations in Interpreting and Producing Indexicals With Chatgpt

Batuhan Erdogan, Bogazici University, Istanbul, Turkey

ABSTRACT

This study examines the limitations of OpenAI's ChatGPT models (GPT-3.5 and GPT-4) in interpreting and utilizing indexicals. While GPT-4 shows some performance improvements over GPT-3.5, both models frequently misinterpret indexicals in prompts and occasionally err in producing them in specific contexts. The models' abilities vary with the type of contextual environment simulated by the user, demonstrating better competence in discrete environments and conversational implicatures. ChatGPT generally avoids context-dependent language in its responses. Through word frequency analysis of four demonstrative indexicals across essays written by humans and the two GPT models, we found GPT-4 significantly more likely to produce such indexicals than GPT-3.5. Inspired by Heidegger's concept of Being-in-the-World, we propose a new training method using narratives with multiple first-person perspectives within a fictional world to enhance the models' handling of pronominal indexicals.
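The word-frequency analysis described above can be sketched in a few lines of standard-library Python: counting the four demonstrative indexicals and normalizing per thousand words. The sample sentence is invented for illustration; the study ran this over full essays.

```python
# Sketch of demonstrative-indexical frequency analysis.
import re
from collections import Counter

DEMONSTRATIVES = ("this", "that", "these", "those")

def indexical_rate(text):
    """Return per-word counts and a rate per 1000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in DEMONSTRATIVES)
    return counts, 1000 * sum(counts.values()) / len(words)

text = "This claim is bold, but those results suggest that these models differ."
counts, per_thousand = indexical_rate(text)
print(counts)
```

Comparing such rates between human-written and model-generated essays is how the study quantifies GPT-4's greater use of demonstratives relative to GPT-3.5.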

KEYWORDS

Artificial Intelligence, Pragmatics, Semantics, Linguistics, Indexicals, LLMs, Artificial Neural Networks, Philosophy of Artificial Intelligence.


Enhancing Aspect-based Sentiment Analysis for Tamil Using Bi-lstm With Attention Mechanism

Venkat Vedanarayanan, Aniska Chatterjee, Jerush Imanto M and Joe Dhanith PR, School of Computer Science Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India

ABSTRACT

In this paper, we propose a novel Tamil Aspect-Based Sentiment Analysis (ABSA) method utilizing a Bidirectional Long Short-Term Memory (Bi-LSTM) network with an attention mechanism. This model effectively addresses the challenges presented by morphologically rich languages like Tamil. Our method is evaluated using a Tamil dataset derived from the Sem-Eval 2014 restaurant reviews, a benchmark for ABSA tasks. The Bi-LSTM-attention model significantly outperforms traditional machine learning and standard deep learning approaches, achieving an F1-score of 80% and a precision of 72% in sentiment classification. Detailed analysis, including an ablation study, underscores the importance of the Bi-LSTM and attention components. This work enhances sentiment analysis for low-resource languages, offers insights for other Tamil NLP tasks, and demonstrates the applicability of deep learning to other morphologically rich languages, paving the way for robust multilingual sentiment analysis systems.
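The attention step applied on top of the Bi-LSTM can be sketched without a deep learning framework: score each timestep's hidden state, softmax the scores, and take the weighted sum as the context vector. The hidden states and scoring vector below are tiny made-up values; a real model learns the scoring weights jointly with the Bi-LSTM.

```python
# Minimal additive-style attention sketch over per-token hidden states.
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(hidden_states, score_vector):
    """hidden_states: one vector per token; score_vector: learned weights."""
    scores = [sum(h * w for h, w in zip(state, score_vector))
              for state in hidden_states]
    alphas = softmax(scores)          # attention weights, sum to 1
    dim = len(hidden_states[0])
    context = [sum(a * state[i] for a, state in zip(alphas, hidden_states))
               for i in range(dim)]   # weighted sum of hidden states
    return alphas, context

states = [[0.1, 0.2], [0.9, 0.8], [0.3, 0.1]]   # made-up Bi-LSTM outputs
alphas, context = attend(states, [1.0, 1.0])
print([round(a, 3) for a in alphas])
```

The context vector then feeds the sentiment classification layer; the learned weights let the model focus on aspect-relevant tokens, which the paper's ablation study shows is key to its F1 gains.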

KEYWORDS

Aspect-Based sentiment analysis, Attention, Bi-LSTM, Low Resource Language processing, Tamil Language.


Enhancing Test Automation With Deep Learning: Techniques, Challenges, and Future Prospects

Narendar Kumar Ale, MS (IT), University of the Cumberlands, Williamsburg, KY, USA

ABSTRACT

Test automation is crucial for maintaining software quality and efficiency, especially in today's fast-paced development environments. Deep learning, a subset of machine learning, offers promising advancements in automating complex testing processes. This paper explores various techniques of integrating deep learning into test automation, identifies the challenges faced, and discusses the prospects of this technology in enhancing software testing efficiency and effectiveness. Detailed case studies, future prospects, and comprehensive literature reviews are included to provide a thorough understanding of the subject.

KEYWORDS

Test Automation, Deep Learning, Software Testing, Machine Learning, AI.


AI Assisted Software Engineering

Prateek Khanna, Research Scholar, Amity University, Ranchi

ABSTRACT

AI-assisted software engineering (AI-SE) refers to the integration of artificial intelligence techniques into the software development lifecycle (SDLC) to augment developers' capabilities and improve the overall development process. It aims to augment developer productivity using AI technology as an enabling tool. Increased efficiency and improved quality can potentially translate to lower development costs. AI-SE tools automate repetitive tasks across various development stages, freeing developers to focus on complex problem-solving. Code completion, static code analysis, and automated testing are just a few examples of AI-SE functionalities that streamline workflows and reduce development time. Beyond efficiency gains, AI-SE empowers developers by automating routine tasks. This frees them to explore new architectural designs, experiment with emerging technologies, and contribute more meaningfully to the overall software vision. AI-powered code generation and intelligent assistance tools can also bridge the skill gap between junior and senior developers, fostering a more inclusive and accessible software development environment. However, the effectiveness of AI-SE models heavily relies on the quality and diversity of the data they are trained on. Biases in training data can lead to biased recommendations from AI tools. Mitigating bias through diverse datasets and ensuring explainability in AI-SE models are crucial for building trust with developers. Looking ahead, AI-SE is expected to become even more sophisticated. Advancements in generative AI for complex code creation, explainable AI for greater transparency, and AI-powered debugging tools are on the horizon. The seamless integration of AI-SE with Agile methodologies holds the promise of a continuous improvement cycle, ensuring high-quality software delivery at a faster pace.
By embracing AI-SE and adapting development practices, software engineering teams can unlock new levels of efficiency, quality, and innovation. This paper provides an overview of AI-SE, its applications, benefits, challenges, and future outlook, making it a valuable resource for both developers and those interested in the future of software development.

KEYWORDS

AI, Software Engineering, AI-SE, SDLC.


A Multimodal Approach for English to Telugu Translation With Fnet and Transformer Models

Joshita Malla, Ujwala Kanumuri, Swathi Mytreye Chaganti and Swaminathan Jayaraman, Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, India

ABSTRACT

This research paper develops a versatile multimodal translation system specifically designed for translating English to Telugu, addressing accessibility issues across various applications. Our method uses ResNet50 for robust image feature extraction and word tokenization to generate English and Telugu tokens. The translation is done through an FNet encoder-decoder architecture, with the multimodal encoder combining textual and visual information and the decoder generating the Telugu translation, ensuring results that are both accurate and contextually appropriate. Based on extensive experimental observations, considerable improvements were observed compared with unimodal approaches, especially in terms of translation quality. Our framework goes beyond assistive devices for the blind and can serve as a navigation aid, communication tool, or educational resource, thereby improving access and inclusion among individuals who speak different languages.

KEYWORDS

Multimodal translation, ResNet50, Transformer-based encoder-decoder, accessibility, inclusivity.


Classification of Uterine Disorders Using Deep Learning Models

Baby Vijilin and Dr. Anitha V. S., Govt. Engineering College Wayanad, Kerala Technological University, India

ABSTRACT

Uterine disorders significantly impact women’s reproductive health and overall well-being. Common benign uterine disorders include uterine fibroids, adenomyosis, and polycystic ovary syndrome (PCOS); malignant uterine disorders include endometrial and cervical cancers. This paper reviews the risk factors, early detection methods, and advances in personalized treatment options, highlighting the importance of a multidisciplinary approach to managing these potentially life-threatening conditions. Attention is also given to less common but clinically significant uterine disorders, including uterine anomalies, endometriosis, and uterine sarcomas. Diagnostic approaches, including imaging techniques and laboratory tests, are scrutinized for their accuracy and clinical utility in identifying these conditions. Magnetic resonance imaging (MRI), ultrasound, and hysteroscopy have emerged as pivotal tools for visualizing uterine anomalies, fibroids, and adenomyosis; the sensitivity and specificity of these modalities are critically evaluated, with an emphasis on their role in providing accurate, non-invasive diagnostic information. The paper surveys research on diagnosing uterine disorders with deep learning and presents a comparative study of deep learning models for detecting PCOS in ultrasound images.

KEYWORDS

Uterine Disorders, Deep Learning, ultrasound, PCOS.


Early Detection and Segmentation of Cancer Cell Image of Liver by Using Improved Marker-controlled Watershed Algorithm With Sobel Edge-detection Approach

Tahamina Yesmin1, Udit Kr Chakraborty2, Kisalaya Chakrabarti2, 1Department of Computer Science and Engineering, Sikkim Manipal Institute of Technology, Majitar, Sikkim, India, 2Department of Electronics and Communication Engineering, Haldia Institute of Technology, Haldia, West Bengal, India

ABSTRACT

Liver cancer remains one of the most critical and complex diseases to detect and diagnose accurately, and it is among the deadliest in its advanced stages. Cancer cell detection is difficult in medical image analysis because many features and segments must be examined during diagnosis to extract the affected area. CT, MRI, and X-ray scans are used in medical image analysis to identify the disease, but even these do not always yield clear and detailed information. Various methods have been presented and developed in earlier studies. Building on that work, this study presents a new view of the segmentation problem: an improved marker-controlled watershed approach combined with the Sobel edge-detection method, yielding a less over-segmented image that reduces the noise arising from the unprocessed image and gives a clearer view of the biomedical image. The process supports early detection of cancer cells and, through proper diagnosis, helps prevent their spread to other parts of the body. This study also proposes a statistical analysis of the images obtained after the proposed algorithm is applied.
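A minimal sketch of the standard marker-controlled watershed with a Sobel gradient, using SciPy on a synthetic image; this illustrates the baseline technique, not the paper's improved algorithm, and the image, seed positions, and sizes are all assumed:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic grayscale "scan": a bright disc on a dark background
# stands in for a suspicious region in a liver slice.
img = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 200

# Sobel gradient magnitude: the watershed floods outward from the
# markers and stops at high-gradient ridges (the region boundary).
gx = ndi.sobel(img.astype(float), axis=0)
gy = ndi.sobel(img.astype(float), axis=1)
grad = np.hypot(gx, gy)
grad8 = (255 * grad / grad.max()).astype(np.uint8)

# Markers: label 1 seeds the background, label 2 seeds the region.
markers = np.zeros_like(img, dtype=np.int16)
markers[1, 1] = 1
markers[32, 32] = 2

labels = ndi.watershed_ift(grad8, markers)
region = labels == 2
print(int(region.sum()))  # pixel count of the segmented region
```

Marker placement is what suppresses the over-segmentation that a plain watershed produces; the paper's contribution lies in how the markers and edges are improved, which this sketch does not attempt to reproduce.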

KEYWORDS

Morphology, Marker-controlled, Cancer, Segmentation, Medical Image.


Power Spectrum Estimation Method to Detect Target Frequency From SSVEP Brain Signals

Hritik1, Mukesh Kumar Ojha2 and Manoj Kumar Mukul3, 1Department of Electronic Science, University of Delhi, Delhi, India, 2Department of ECE, Greater Noida Institute of Technology, Greater Noida, India, 3Department of Electronic Science, University of Delhi, Delhi, India

ABSTRACT

The authors of this paper focus on a combined approach of empirical mode decomposition (EMD) and power spectrum estimation (PSE) to detect the target frequency from a single-channel recorded electroencephalogram (EEG) signal. SSVEP is a periodic signal captured within a recorded EEG signal when the subject fixes attention on a visual stimulus flickering at a certain frequency; the most critical issue is determining the target flicker frequency from the SSVEP brain signal. The single-channel EEG signals are decomposed into several oscillating functions known as intrinsic mode functions (IMFs) using EMD. Various PSE methods, such as FFT, Welch, Burg, and Yule-Walker, are applied to plot the magnitude spectrum of the corresponding IMFs. Finally, a threshold detection technique is applied to the chosen IMFs to identify the target frequency. The recorded EEG signals contain stimulus frequencies from 6 Hz to 15.8 Hz. The results show that the EMD+Welch method outperformed the EMD+FFT, EMD+Yule-Walker, and EMD+Burg methods.
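As an illustration of the Welch step alone, on a synthetic signal rather than a recorded EEG and without the EMD stage, the target frequency can be read off as the peak of the Welch power spectrum. The sampling rate, flicker frequency, and noise level below are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)

# Synthetic SSVEP-like signal: a 10 Hz flicker response buried in noise.
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Welch periodogram: averaged windowed segments trade frequency
# resolution for a lower-variance spectrum than a raw FFT.
f, pxx = welch(x, fs=fs, nperseg=512)
target = f[np.argmax(pxx)]
print(target)  # peak frequency estimate, expected near 10 Hz
```

In the paper's pipeline this spectrum would be computed on selected IMFs from EMD rather than on the raw signal, followed by threshold detection over the candidate stimulus frequencies.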

KEYWORDS

Empirical Mode Decomposition (EMD), Electroencephalography (EEG), Power Spectrum Estimation (PSE), Steady State Visual Evoked Potential (SSVEP), Brain-computer Interface (BCI).