Graduate Theses & Dissertations

Development of a Cross-Platform Solution for Calculating Certified Emission Reduction Credits in Forestry Projects under the Kyoto Protocol of the UNFCCC
This thesis presents an exploration of the requirements for, and development of, a software tool to calculate Certified Emission Reduction (CER) credits for afforestation and reforestation projects conducted under the Clean Development Mechanism (CDM). We examine the relevant methodologies and tools to determine what is required to create a software package that can support a wide variety of projects involving diverse data and computations. During requirements gathering, we determined that the software would need to support entering and editing equations at runtime. We used Java as the programming language, an H2 database to store data, and an XML file to store configuration settings. These choices allowed us to build a cross-platform software solution for the purpose outlined above. The end result is a versatile software tool through which users can create and customize projects to meet their unique needs, as well as use the provided features to streamline the management of their CDM projects. Author Keywords: Carbon Emissions, Climate Change, Forests, Java, UNFCCC, XML
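The core requirement above, entering and editing equations at runtime, can be illustrated with a minimal Python sketch using sympy to parse a user-entered formula; the variable names and sample formula are hypothetical, and the thesis itself implements this in Java with an H2 database:

```python
# A minimal sketch, not the thesis's Java implementation: parse a
# user-entered equation at runtime and evaluate it against project data.
import sympy as sp

def evaluate_user_equation(expr_text, values):
    expr = sp.sympify(expr_text)          # parse the formula string at runtime
    return float(expr.subs(values))       # substitute project data and evaluate

# Hypothetical CER-style formula: biomass carbon converted to CO2 equivalent
# (44/12 is the molar-mass ratio of CO2 to carbon).
formula = "area * biomass_density * carbon_fraction * 44 / 12"
print(evaluate_user_equation(formula, {
    "area": 120.0,             # hectares
    "biomass_density": 85.0,   # tonnes of dry matter per hectare
    "carbon_fraction": 0.47,   # assumed carbon fraction of dry matter
}))
```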
Sinc-Collocation Difference Methods for Solving the Gross-Pitaevskii Equation
The time-dependent Gross-Pitaevskii Equation, describing the movement of particles in quantum mechanics, cannot be solved analytically due to its inherent nonlinearity, so numerical methods are important for approximating the solution. This study develops a scheme that is discrete in time and space to simulate the solution on a finite domain, using the Crank-Nicolson difference method and Sinc Collocation Methods (SCM), respectively. In theory and practice, the time discretization converges with second-order accuracy, while the SCMs' errors decay exponentially. A new SCM with a unique boundary treatment is proposed and compared with the original SCM and other similar numerical techniques in terms of time cost and numerical error. The new SCM reduces errors faster than the original one; to attain the same accuracy, it also interpolates fewer nodes than the original SCM, which saves computational cost. The new SCM is capable of approximating partial differential equations under different boundary conditions, and can be applied extensively in fitting theory. Author Keywords: Crank-Nicolson difference method, Gross-Pitaevskii Equation, Sinc-Collocation methods
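As a concrete illustration of the exponential convergence that distinguishes SCMs, here is a minimal Python sketch of sinc interpolation on a uniform grid; the grid spacing and test function are arbitrary, and the thesis's SCM and boundary treatment are more involved:

```python
# A minimal sketch of sinc interpolation: f(x) ~ sum_k f(kh) sinc((x - kh)/h).
# np.sinc(t) is the normalized sinc, sin(pi t) / (pi t).
import numpy as np

def sinc_interpolate(nodes, f_vals, h, x):
    return np.sinc((x[:, None] - nodes[None, :]) / h) @ f_vals

h = 0.5
nodes = np.arange(-20, 21) * h
f_vals = np.exp(-nodes**2)                 # sample f(t) = exp(-t^2) at the nodes
x = np.linspace(-3.0, 3.0, 201)
err = np.abs(sinc_interpolate(nodes, f_vals, h, x) - np.exp(-x**2)).max()
print(err)                                 # shrinks roughly like exp(-c/h) as h -> 0
```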
Influence of geodemographic factors on electricity consumption and forecasting models
The residential sector is a major consumer of electricity, and its demand is projected to rise by 65 percent by 2050. The electricity consumption of a household is determined by various factors, e.g., house size, the socio-economic status of the family, and family size. Previous studies have identified only a limited number of socio-economic and dwelling factors. In this thesis, we study the significance of 826 geodemographic factors on electricity consumption for 4917 homes in the City of London. Geodemographic factors cover a wide array of categories, e.g., social, economic, dwelling, family structure, health, education, finance, occupation, and transport. Using Spearman correlation, we have identified 354 factors that are strongly correlated with electricity consumption. We also examine the impact of using geodemographic factors in designing forecasting models. In particular, we develop an encoder-decoder LSTM model that shows improved accuracy with geodemographic factors. We believe that our study will help energy companies design better energy management strategies. Author Keywords: Electricity forecasting, Encoder-decoder model, Geodemographic factors, Socio-economic factors
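The correlation-screening step can be sketched as follows; the factor names, synthetic data, and thresholds are hypothetical stand-ins for the 826 geodemographic factors studied:

```python
# A minimal sketch of Spearman screening on synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
consumption = rng.gamma(2.0, 5.0, size=500)            # kWh per household
factors = {
    "dwelling_size": consumption * 0.8 + rng.normal(0, 3, 500),
    "household_members": rng.integers(1, 6, 500).astype(float),
}

strongly_correlated = []
for name, values in factors.items():
    rho, p_value = spearmanr(values, consumption)
    if p_value < 0.05 and abs(rho) > 0.3:              # illustrative thresholds
        strongly_correlated.append((name, round(rho, 2)))
print(strongly_correlated)
```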
Pathways to Innovation
Research and development activities conducted at universities and firms fuel economic growth and play a key role in the process of innovation. Specifically, prior research has investigated the widespread university-to-firm research development path and concluded that universities are better suited for early stages of research while firms are better positioned for later stages. This thesis aims to present a novel explanation for the pervasive university-to-firm research development path. The model developed uses game theory to visualize and analyze interactions between a firm and a university under different strategies. The results reveal that academic research signals knowledge and thereby helps attract tuition-paying students. Generating these tuition revenues is facilitated by university research discoveries, which, once published, a firm can build upon to make new innovative products. Moreover, in an environment of weak intellectual property rights, the university-to-firm research development path enables firms to bypass the hefty costs involved in basic research activities. The model also provides a range of solution scenarios in which a university and a firm may find it viable to initiate a research line. Author Keywords: Game theory, Intellectual property rights, Nash equilibrium, Research and development, University-to-firm research path
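The game-theoretic setup can be sketched with a toy 2x2 payoff matrix; the payoff numbers below are invented for illustration and are not those derived in the thesis:

```python
# A minimal sketch: enumerate pure-strategy Nash equilibria of a toy
# university-firm game with invented payoffs.
import numpy as np

# Rows: university {research, no research}; columns: firm {develop, abstain}.
university = np.array([[5, 1],
                       [2, 2]])
firm = np.array([[6, 0],
                 [3, 2]])

equilibria = []
for r in range(2):
    for c in range(2):
        if university[r, c] >= university[1 - r, c] and firm[r, c] >= firm[r, 1 - c]:
            equilibria.append((r, c))
print(equilibria)   # [(0, 0)]: the university researches and the firm develops
```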
Automated Grading of UML Class Diagrams
Learning how to model the structural properties of a problem domain or an object-oriented design in the form of a class diagram is an essential learning task in many software engineering courses. Since grading UML assignments is a cumbersome and time-consuming task, there is a need for an automated grading approach that can assist instructors by speeding up the grading process, as well as ensuring consistency and fairness in large classrooms. This thesis presents an approach for the automated grading of UML class diagrams. A metamodel is proposed to establish mappings between the instructor's solution and all student solutions for a class, which allows the instructor to easily adjust the grading scheme. The approach employs a grading algorithm that uses syntactic, semantic, and structural matching to match a student's solution with the instructor's solution. The efficiency of this automated grading approach was empirically evaluated in two real-world settings: a beginner undergraduate class of 103 students required to create an object-oriented design model, and an advanced undergraduate class of 89 students elaborating a domain model. The experimental results show that the grading approach should be configurable, so that it can adapt its grading strategy and strictness to the level of the students and the grading styles of different instructors. It is also important to consider multiple solution variants in the grading process. The grading algorithm and tool are proposed and validated experimentally. Author Keywords: automated grading, class diagrams, model comparison
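A simplified version of the matching idea can be sketched as follows; the data model and weights are hypothetical reductions of the thesis's syntactic, semantic, and structural matching:

```python
# A minimal sketch: score a student's class against the instructor's class by
# name similarity (syntactic) plus attribute overlap (structural).
from difflib import SequenceMatcher

def match_score(student, instructor):
    name_sim = SequenceMatcher(None, student["name"].lower(),
                               instructor["name"].lower()).ratio()
    attr_sim = len(student["attrs"] & instructor["attrs"]) / max(len(instructor["attrs"]), 1)
    return 0.5 * name_sim + 0.5 * attr_sim        # illustrative weighting

student = {"name": "LibraryMember", "attrs": {"name", "memberId"}}
instructor = {"name": "Member", "attrs": {"name", "memberId", "email"}}
print(round(match_score(student, instructor), 2))
```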
Fraud Detection in Financial Businesses Using Data Mining Approaches
The purpose of this research is to apply four methods to two datasets, a synthetic dataset and a real-world dataset, and compare the results with the intention of arriving at methods to prevent fraud. The methods used are Logistic Regression, Isolation Forest, an Ensemble Method, and Generative Adversarial Networks (GANs). Results show that all four models achieve accuracies between 91% and 99%, except Isolation Forest, which gave 69% accuracy on the synthetic dataset. The four models detect fraud well when built on a training set and tested on a test set. Logistic Regression achieves good results with little computational effort. Isolation Forest achieves lower accuracy when the data is sparse and not preprocessed correctly. The Ensemble Method achieves the highest accuracy for both datasets. The GAN achieves good results but overfits if a large number of epochs is used. Future work could incorporate other classifiers. Author Keywords: Ensemble Method, GAN, Isolation forest, Logistic Regression, Outliers
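Two of the four methods can be sketched on synthetic data as follows; the class imbalance and contamination rate are illustrative, not the datasets used in the research:

```python
# A minimal sketch: supervised Logistic Regression vs. unsupervised
# Isolation Forest on an imbalanced synthetic fraud dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("logistic regression accuracy:", logreg.score(X_te, y_te))

iso = IsolationForest(contamination=0.03, random_state=0).fit(X_tr)
fraud_pred = (iso.predict(X_te) == -1).astype(int)   # -1 marks an outlier
print("isolation forest accuracy:", (fraud_pred == y_te).mean())
```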
Historic Magnetogram Digitization
The conversion of historical analog images to time series data was performed using deconvolution for pre-processing, followed by custom-built digitization algorithms. These algorithms were developed to be user-friendly, with the objective of aiding the creation of a dataset from decades of mechanical observations collected at the Agincourt and Toronto geomagnetic observatories beginning in the 1840s. The algorithms follow a structure that begins with pre-processing, followed by tracing and pattern detection. Each digitized magnetogram was then visually inspected and the algorithm's performance verified to ensure accuracy, and to allow the data to later be connected into a long-running time series. Author Keywords: Magnetograms
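The deconvolution pre-processing step can be sketched as follows (assuming a recent scikit-image); the Gaussian point-spread function is a hypothetical stand-in for whatever blur model the digitization pipeline actually uses:

```python
# A minimal sketch: Richardson-Lucy deconvolution of a scanned image.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

scan = np.random.default_rng(0).random((64, 64))   # stand-in for a scanned trace
restored = richardson_lucy(scan, gaussian_psf(), num_iter=30)
print(restored.shape)
```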
Augmented Reality Sandbox (Aeolian Box)
The AeolianBox is an educational and presentation tool, extended in this thesis to represent atmospheric boundary layer (ABL) flow over a deformable surface in a sandbox. It is a hybrid hardware and mathematical model which helps users visually, interactively, and spatially grasp the natural laws governing ABL airflow. The AeolianBox uses a Kinect V1 camera and a short-focal-length projector to capture a Digital Elevation Model (DEM) of the topography within the sandbox. The captured DEM is used to generate a Computational Fluid Dynamics (CFD) model and project the ABL flow back onto the surface topography within the sandbox. The AeolianBox is designed to be used in a classroom setting, which requires a low time cost for the ABL flow simulation to keep students engaged. Thus, the DEM capture and CFD modelling processes were investigated to lower the time cost while maintaining key features of the ABL flow structure. A mesh-time sensitivity analysis was also conducted to investigate the trade-off between the number of cells in the mesh and the time cost of both the meshing process and the CFD modelling. This allows the user to make an informed decision regarding the level of detail desired in the ABL flow structure by changing the number of cells in the mesh. There are infinitely many surface topographies which can be created by molding sand inside the sandbox. Therefore, in addition to keeping the time cost low while maintaining key features of the ABL flow structure, the meshing process and CFD modelling are required to be robust to a variety of surface topographies. To achieve these research objectives, the meshing process and CFD modelling are parametrized in this thesis. The accuracy of the CFD model for ABL flow used in the AeolianBox was qualitatively validated against airflow profiles captured in the Trent Environmental Wind Tunnel (TEWT) at Trent University using a Laser Doppler Anemometer (LDA). Three simple geometries, namely a hemisphere, a cube, and a ridge, were selected since they are well studied in the literature. The CFD model was scaled to the dimensions of the grid where the airflow was captured in the TEWT, and the boundary conditions were kept the same as in the model used in the AeolianBox. The ABL flow is simulated using OpenFoam to build the CFD model and Paraview to visualize it. The AeolianBox is interactive and capable of detecting hands using the Kinect camera, which allows a user to change the topography of the sandbox in real time. The AeolianBox software built for this thesis uses only open-source tools and is accessible to anyone with an existing hardware model of its predecessors. Author Keywords: Augmented Reality, Computational Fluid Dynamics, Kinect Projector Calibration, OpenFoam, Paraview
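The DEM-capture step can be sketched as follows; the calibration constant and cell size are hypothetical placeholders for values measured during Kinect-projector calibration:

```python
# A minimal sketch: convert a Kinect depth frame (mm from the sensor) into a
# sand-height DEM, block-averaged into coarser cells to keep the CFD mesh small.
import numpy as np

SENSOR_TO_TABLE_MM = 1000.0                    # hypothetical calibration value

def depth_to_dem(depth_mm, cell=4):
    height = np.clip(SENSOR_TO_TABLE_MM - depth_mm, 0, None)  # taller sand, smaller depth
    h, w = height.shape
    blocks = height[: h - h % cell, : w - w % cell]
    return blocks.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

frame = np.full((480, 640), 900.0)             # stand-in 640x480 Kinect V1 frame
frame[200:280, 300:380] = 800.0                # a mound 100 mm above the sand
print(depth_to_dem(frame).max())               # ~200.0 mm above the table
```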
Representation Learning with Restorative Autoencoders for Transfer Learning
Deep Neural Networks (DNNs) have reached human-level performance in numerous computer vision tasks. DNNs are efficient for both classification and the more complex task of image segmentation. These networks are typically trained on thousands of images, which are often hand-labelled by domain experts. This bottleneck creates a promising research area: training accurate segmentation networks with fewer labelled samples. This thesis explores effective methods for learning deep representations from unlabelled images. We train a Restorative Autoencoder Network (RAN) to denoise synthetically corrupted images. The weights of the RAN are then fine-tuned on a labelled dataset from the same domain for image segmentation. We use three different segmentation datasets to evaluate our methods. In our experiments, we demonstrate that with our methods, only a fraction of the data is required to achieve the same accuracy as a network trained on a large labelled dataset. Author Keywords: deep learning, image segmentation, representation learning, transfer learning
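The pre-training step can be sketched as follows (assuming PyTorch); the tiny architecture and noise model are stand-ins for the RAN and the synthetic corruptions used in the thesis:

```python
# A minimal sketch of denoising pre-training: corrupt inputs, train the
# network to restore the clean image, then reuse the encoder downstream.
import torch
import torch.nn as nn

class TinyRAN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model, loss_fn = TinyRAN(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)               # unlabelled images
noisy = clean + 0.2 * torch.randn_like(clean)  # synthetic corruption
loss = loss_fn(model(noisy), clean)            # learn to restore the clean image
loss.backward(); opt.step()
# model.encoder's weights would then be fine-tuned on labelled segmentation data.
```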
Predicting Irregularities in Arrival Times for Toronto Transit Buses with LSTM Recurrent Neural Networks Using Vehicle Locations and Weather Data
Public transportation systems play an important role in the quality of life of citizens in any metropolitan city. However, public transportation authorities face criticism from commuters due to irregularities in bus arrival times. For example, transit bus users often complain when they miss the bus because it arrived too early or too late at the bus stop. Due to these irregularities, commuters may miss important appointments, wait too long at the bus stop, or arrive late for work. This thesis seeks to predict the occurrence of irregularities in bus arrival times by developing machine learning models that use the GPS locations of transit buses provided by the Toronto Transit Commission (TTC) and hourly weather data. We found that nearly 37% of the time, buses arrive either early or late by more than 5 minutes, suggesting room for improvement in the current strategies employed by transit authorities. We compared the performance of three machine learning models; our Long Short-Term Memory (LSTM) [13] model outperformed the others in terms of accuracy, with a lower error rate than both the Artificial Neural Network (ANN) and support vector regression (SVR) models. The improved accuracy achieved by the LSTM is due to its ability to adjust and update the weights of neurons while maintaining long-term dependencies when encountering new streams of data. Author Keywords: ANN, LSTM, Machine Learning
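The sequence model can be sketched as follows (assuming PyTorch rather than whatever framework the thesis used); the feature layout, window length, and layer sizes are hypothetical:

```python
# A minimal sketch: an LSTM mapping a window of per-ping features
# (e.g. position, speed, weather) to the arrival deviation in minutes.
import torch
import torch.nn as nn

class DelayLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time steps, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict from the last time step

model = DelayLSTM()
window = torch.randn(16, 10, 6)            # 16 trips, 10 GPS pings, 6 features
print(model(window).shape)                 # torch.Size([16, 1])
```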
Relationship Between Precarious Employment, Behaviour Addictions and Substance Use Among Canadian Young Adults
This thesis utilized a unique dataset, the Quinte Longitudinal Survey, to explore the relationships between precarious employment and a range of mental health problems in a representative sample of Ontario young adults. Study 1 focused on various behavioural addictions (such as problem gambling, video gaming, internet use, exercise, compulsive shopping, and sex) and precarious employment. The results showed that precariously employed men were preoccupied with gambling and sex, while their female counterparts preferred shopping. Gambling and excessive shopping diminished over time, while excessive sexual practices increased. Study 2 focused on the association between precarious employment and substance use (such as tobacco, alcohol, cannabis, hallucinogens, stimulants, and other substances). The results showed that men used cannabis more than women, and that the non-precariously employed group abused alcohol more than individuals in the precarious group. This research has implications for both health care professionals and intervention program developers working with young adults in precarious jobs. Author Keywords: Behaviour Addictions, Precarious Employment, Substance Abuse, Young Adults
Exploring the Scalability of Deep Learning on GPU Clusters
In recent years, we have observed an unprecedented rise in the popularity of AI-powered systems. They have become ubiquitous in modern life, used by countless people every day. Many of these AI systems are powered, entirely or partially, by deep learning models. From language translation to image recognition, deep learning models are being used to build systems with unprecedented accuracy. The primary downside is the significant time required to train these models. Fortunately, training time is reduced through the use of GPUs rather than CPUs. However, with model complexity ever increasing, training times even with GPUs are on the rise. One possible solution to ever-increasing training times is to use parallelization to enable the distributed training of models on GPU clusters. This thesis investigates how to utilise clusters of GPU-accelerated nodes to achieve the best scalability possible, thus minimising model training times. Author Keywords: Compute Canada, Deep Learning, Distributed Computing, Horovod, Parallel Computing, TensorFlow
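The distributed-training pattern that the thesis's keywords point to (Horovod over TensorFlow) follows a standard recipe, sketched below with a placeholder model and data; it would be launched with one process per GPU, e.g. `horovodrun -np 4 python train.py`:

```python
# A minimal sketch of the standard Horovod data-parallel recipe; the model
# and data are placeholders, not the workloads benchmarked in the thesis.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()                                        # one process per GPU
gpus = tf.config.list_physical_devices("GPU")
if gpus:                                          # pin each process to its GPU
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
# Scale the learning rate by the worker count and wrap the optimizer so
# gradients are averaged across GPUs via ring-allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt)

x = tf.random.normal((256, 8))
y = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=int(hvd.rank() == 0),
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)])
```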
