Faculty of Engineering, Computer Science & Engineering
http://dl.lib.uom.lk/handle/123/47
Theses / Dissertations submitted to Department of Computer Science & Engineering
2024-03-29T08:30:47Z
An Adaptive software architectural framework for an interactive learning toolkit
http://dl.lib.uom.lk/handle/123/22104
An Adaptive software architectural framework for an interactive learning toolkit
Jayasiriwardene DPS
At present, a significant demand has emerged for online education tools that can be used as a replacement for classroom education. Due to the ease of access and the high availability of mobile devices, the preference of many users is focused on m-learning applications. Thus, this study presents an adaptive software architectural framework for an interactive learning toolkit. As a case study, the application is applied to the primary education sector in Sri Lanka, as there is a lack of learning tools that allow teachers and students to interact effectively. Accordingly, a software architectural framework was designed with the features of adaptivity, learning content authoring,
learning content management, low resource utilization, and low power consumption. The study includes an extensive literature review conducted to identify unique gaps in existing studies. Further, the study designs and develops an architecture with the intended features effectively embedded in it. Furthermore, an m-learning application named “iLearn” is developed as a proof-of-concept by implementing the architectural design. Moreover, the prototype was evaluated for functional requirements by successfully conducting unit tests and user interface tests. The non-functional requirements of the application were evaluated by conducting a system usability survey of 20 teachers and 20 students, which received good usability scores of 80.5% and 83.6%, respectively. Also, the performance of the application was tested and received a good overall outlook, where it was found that the application has below-average consumption of memory, CPU, and battery at peak performance. The application is concluded as a success, with the potential to be enhanced with cutting-edge technology.
2023-01-01T00:00:00Z
Computational model for glaucoma classification
http://dl.lib.uom.lk/handle/123/22201
Computational model for glaucoma classification
Shyamalee KWT
Glaucoma is a leading cause of blindness and affects millions of people worldwide.
It is a chronic eye condition that damages the optic nerve and, if left untreated, can
lead to vision loss and decreased quality of life. According to the World Health Organization,
it affects approximately 65 million people worldwide. Thus, there is a requirement
for an effective and reliable mechanism for the identification of glaucoma.
This study addresses a computational model for the glaucoma identification process.
The proposed system uses fundus images of the eye. Recent developments in deep
learning (DL) and the availability of computing resources now make automated glaucoma
diagnosis tools feasible. However, generic Convolutional Neural Networks (CNNs) are
still not frequently used in medical settings, despite the advances deep learning has
made in disease diagnosis from medical images, because of the limited trustworthiness
of these models. Despite the rise in popularity of deep learning-based glaucoma
classification in recent years, few studies have focused on the models’ explainability
and interpretability, which boost user confidence in such applications. To predict
glaucoma conditions, this study uses state-of-the-art deep learning techniques to
segment and classify fundus images. To make the results more understandable,
visualization techniques are used to present the findings. Our predictions are based
on a modified InceptionV3 architecture and a U-Net with attention mechanisms.
Additionally, using Gradient-weighted Class Activation Mapping (Grad-CAM) and
Grad-CAM++, we create heatmaps that show the areas that influenced the glaucoma diagnosis.
With the RIM-ONE dataset, our findings demonstrate the best accuracy, sensitivity,
and specificity values of 98.97%, 99.42%, and 95.59%, respectively. With the aid of
fundus images, this model can be used to support automated glaucoma diagnosis.
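The Grad-CAM step described in the abstract can be illustrated in a few lines. The sketch below is a minimal NumPy rendering of the weighting scheme only (channel weights obtained by global-average-pooling the gradients, then a ReLU over the weighted sum of feature maps); the toy tensors stand in for a real InceptionV3 layer's activations and gradients, and are assumptions, not the thesis's implementation.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap for one convolutional layer.

    activations: (H, W, C) feature maps of the chosen conv layer.
    gradients:   (H, W, C) gradients of the target class score w.r.t. them.
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(0, 1))                        # shape (C,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # shape (H, W)
    # Normalise for display; guard against an all-zero map.
    cam_max = cam.max()
    return cam / cam_max if cam_max > 0 else cam

# Toy tensors standing in for a real conv layer's activations/gradients.
rng = np.random.default_rng(0)
acts = rng.random((7, 7, 16))
grads = rng.standard_normal((7, 7, 16))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the fundus image so the clinically relevant regions are visible.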
2023-01-01T00:00:00Z
Flexible and extensible infrastructure monitoring architecture for computing grids with infrastructure aware job matching
http://dl.lib.uom.lk/handle/123/22213
Flexible and extensible infrastructure monitoring architecture for computing grids with infrastructure aware job matching
Wijethunga RMKD
Many research experiments with large data processing requirements rely on massive,
distributed Computing Grids for their computational requirements. A Computing Grid
is built by combining a large number of individual computing sites distributed globally.
These Grid sites are maintained by different institutions across the world and contribute
thousands of worker nodes possessing different capabilities and configurations.
Developing software for Grid operations that works on all nodes while harnessing the
maximum capabilities offered by any given Grid site is challenging without knowing
what capabilities each site offers in advance. This research focuses on developing an
architecture-independent Grid infrastructure monitoring design to monitor the infrastructure
capabilities and configurations of worker nodes at sites across a Computing
Grid without the need to contact local site administrators. The design presents a highly
flexible and extensible architecture that offers infrastructure metric collection without
local agent installations at Grid sites. The resulting design is used to implement a Grid
infrastructure monitoring framework called “Site Sonar v2.0” that is currently being
used to monitor the infrastructure of 7,000+ worker nodes across 60+ Grid sites in the
ALICE Computing Grid. The proposed design is then used to introduce an improved
job matching architecture for Computing Grids that allows jobs to be matched based on any
infrastructure property of the worker nodes. This dissertation presents the proposed
architecture for a highly flexible and extensible Grid infrastructure monitoring design,
an improved job matching design for Computing Grids, and the implementation of those
designs, which are used to derive important findings about the infrastructure of the ALICE
Computing Grid while improving its job matching capabilities. This work provides a significant contribution
to the development of distributed Computing Grids, particularly in terms of
providing a more efficient and effective way to monitor infrastructure and match jobs
to worker nodes.
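The infrastructure-aware job matching idea above can be sketched as a predicate over the metrics the monitoring framework collects per worker node. The function and property names below are hypothetical illustrations, not Site Sonar's actual API; the convention of treating numeric constraints as minimums is likewise an assumption.

```python
def matches(node_metrics: dict, requirements: dict) -> bool:
    """Return True if a worker node satisfies every job requirement.

    node_metrics:  infrastructure properties reported by the monitoring
                   framework, e.g. {"os": "CentOS7", "singularity": True}.
    requirements:  constraints attached to the job; numeric values are
                   treated as minimums, everything else as exact matches.
    """
    for key, required in requirements.items():
        value = node_metrics.get(key)
        if value is None:
            return False                      # property not reported: no match
        if isinstance(required, (int, float)) and not isinstance(required, bool):
            if value < required:
                return False                  # numeric constraint = minimum
        elif value != required:
            return False                      # categorical constraint = exact
    return True

# Illustrative node and job descriptions (property names are assumptions).
node = {"os": "CentOS7", "cores": 32, "memory_gb": 64, "singularity": True}
job = {"os": "CentOS7", "cores": 8, "singularity": True}
print(matches(node, job))            # True
print(matches(node, {"cores": 64}))  # False
```

Because the matcher iterates over whatever keys the job supplies, adding a new infrastructure metric to the monitoring side requires no change to the matching code, which is the flexibility the abstract emphasizes.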
2023-01-01T00:00:00Z
Micro data model architecture for AML scoring rule engines
http://dl.lib.uom.lk/handle/123/21546
Micro data model architecture for AML scoring rule engines
Maduranga WAH
Online and mobile banking have become primary services of today’s banking and
financial sector. Clients can perform their primary transactions without physically
visiting the bank, and the facility is available 24x7. Consequently, detecting money
laundering activities based on transactional data analysis is a key challenge in
today’s banking and financial sector.
Businesses try to prevent money laundering by applying rule-based techniques to
real-time operational transactions, but this cannot completely solve the problem:
strict constraints on operational transactions inconvenience the legitimate customer
base and erode customer satisfaction over time.
Hence, near-real-time and traditional data warehousing approaches with post-detection
techniques have become the most common way to detect money laundering activities
in today’s banking and financial context.
Traditional data warehousing approaches load data from operational or transactional
systems on a weekly or nightly basis. Near-real-time and real-time data warehouse
approaches use real-time ETL tools to load data into the data warehouse at predefined
short intervals, which still leaves a gap with respect to real-time transactional data. In
addition, running anomaly detection engines (rule-based or machine learning
models) on top of those massive amounts of data (either OLTP databases or the warehouse
database) takes considerable additional time due to the high velocity of data. Thus,
identifying money launderers through post-detection techniques poses a higher risk
to the financial system, because a money launderer may leave the financial system
before being caught.
This report introduces a novel data modelling architecture named “Micro Data Model
Architecture” and an associated supporting tool named “Micro Temporal Database
Generator” for scoring rule engines, to detect fraudulent financial activities earlier by
removing the burden on operational data sources.
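To make the “scoring rule engine” idea concrete, the sketch below applies a few rules to a window of a customer's recent transactions and flags the customer when the cumulative score crosses a threshold. The rules, weights, and threshold are illustrative assumptions for exposition, not the thesis's actual rule set or data model.

```python
def score_transactions(window: list[dict], threshold: int = 50) -> tuple[int, bool]:
    """Score a customer's recent transaction window; flag if suspicious.

    Each rule contributes points independently; the total is compared
    against the alert threshold. All rule values are illustrative.
    """
    score = 0
    # Rule 1: several sub-threshold cash deposits (a structuring pattern).
    small_deposits = [t for t in window
                      if t["type"] == "deposit" and t["amount"] < 10_000]
    if len(small_deposits) >= 3:
        score += 30
    # Rule 2: total volume unusually high for the window.
    if sum(t["amount"] for t in window) > 50_000:
        score += 25
    # Rule 3: rapid in-and-out movement (deposit plus outgoing transfer).
    types = {t["type"] for t in window}
    if "deposit" in types and "transfer_out" in types:
        score += 20
    return score, score >= threshold

# Hypothetical transaction window for one customer.
window = [
    {"type": "deposit", "amount": 9_500},
    {"type": "deposit", "amount": 9_800},
    {"type": "deposit", "amount": 9_900},
    {"type": "transfer_out", "amount": 28_000},
]
score, flagged = score_transactions(window)
print(score, flagged)  # 75 True
```

Running such rules over a small per-customer window, rather than over the full warehouse, is what allows scoring to happen closer to real time without burdening the operational data sources.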
2022-01-01T00:00:00Z