Graduate Theses & Dissertations


Assessing the Cost of Reproduction between Male and Female Sex Functions in Hermaphroditic Plants
The cost of reproduction refers to the use of resources for the production of offspring that decreases the availability of resources for future reproductive events and other biological processes. Models of sex allocation provide insights into optimal patterns of resource investment in male and female sex functions and have been extended to include other components of the life history, enabling assessment of the costs of reproduction. These models have shown that costs of reproduction through female function should usually exceed costs through male function. However, these previous models only considered allocations from a single pool of shared resources. Recent studies have indicated that the type of resource currency can differ between female and male sex functions, and that this might affect costs of reproduction via effects on other components of the life history. Using multiple invasibility analysis, this study examined resource allocation to male and female sex functions while simultaneously considering allocations to survival and growth. Allocation patterns were modelled using both shared and separate resource pools. Under shared resources, allocation patterns to male and female sex functions followed the results of earlier models. When resource pools were separate, however, allocations to male function often exceeded allocations to female function, even if fitness gains increased less strongly with investment in male function than with investment in female function. These results demonstrate that the costs of reproduction are affected by (1) the types of resources needed for reproduction via female or male function and (2) trade-offs with other components of the life history. Future studies of the costs of reproduction should examine whether allocations to reproduction via female versus male function usually entail the use of different types of resources.
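The shared-pool case described above can be sketched as a simple optimization. This is an illustrative toy, not the thesis model: the power-law gain curves `w = m**a + f**b` and the exponent values are hypothetical, chosen only to show how a saturating male gain curve yields a female-biased allocation from one shared resource pool.

```python
# Toy sex-allocation optimization under a single shared resource pool.
# Gain-curve exponents a, b and the grid resolution are illustrative
# assumptions, not values taken from the thesis model.

def optimal_allocation(a, b, steps=10000):
    """Grid-search the male allocation m (female gets f = 1 - m) that
    maximizes fitness w = m**a + f**b from one shared resource pool."""
    best_m, best_w = 0.0, float("-inf")
    for i in range(steps + 1):
        m = i / steps
        f = 1.0 - m
        w = m ** a + f ** b
        if w > best_w:
            best_m, best_w = m, w
    return best_m

# A saturating male gain curve (a < 1) with a linear female gain curve
# (b = 1) yields a female-biased optimum, as in the shared-pool results.
m_star = optimal_allocation(a=0.5, b=1.0)
f_star = 1.0 - m_star
```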
Author Keywords: Cost of Reproduction, Gain Curve, Life History, Resource Allocation Patterns, Resource Currencies
Modelling Depressive Symptoms in Emerging Adulthood
Depression during the transition into adulthood is a growing mental health concern, with overwhelming evidence linking the developmental risk for depressive symptoms with maternal depression. In addition, there is a lack of research on the protective role of socioemotional competencies in this context. This study examines the independent and joint effects of maternal depression and trait emotional intelligence (TEI) on the longitudinal trajectory of depressive symptoms during emerging adulthood. A series of latent growth models was applied to three biennial cycles of data from a nationally representative sample (N=933) from the Canadian National Longitudinal Survey of Children and Youth. We assessed the trajectory of self-reported depressive symptoms from age 20 to 24 years, as well as whether it was moderated by maternal depression at age 10 to 11 and TEI at age 20, separately by gender. The results indicated that mean levels of depression declined during emerging adulthood in females, but remained relatively stable in males. Maternal depressive symptoms significantly and positively predicted depressive symptoms across the whole of emerging adulthood in females, but only at age 20-21 for males. In addition, the likelihood of developing depressive symptoms was attenuated by higher global TEI in both females and males, and additionally by higher interpersonal skills in males. Our findings suggest that interventions for depressive symptoms in emerging adulthood should consider the development of socioemotional competencies. Author Keywords: Depression, Depressive Symptoms, Emerging Adulthood, Intergenerational Risk, Longitudinal, Trait Emotional Intelligence
Characteristics of Models for Representation of Mathematical Structure in Typesetting Applications and the Cognition of Digitally Transcribing Mathematics
The digital typesetting of mathematics can present many challenges to users, especially those of novice to intermediate experience levels. Through a series of experiments, we show that two models used to represent mathematical structure in these typesetting applications, the 1-dimensional structure-based model and the 2-dimensional freeform model, cause interference with users' working memory during the process of transcribing mathematical content. This is a notable finding, as a connection between working memory and mathematical performance has been established in the literature. Furthermore, we find that elements of these models allow them to handle various types of mathematical notation with different degrees of success. Notably, the 2-dimensional freeform model allows users to insert and manipulate exponents with increased efficiency and reduced cognitive load and working memory interference, while the 1-dimensional structure-based model allows for handling of the fraction structure with greater efficiency and decreased cognitive load. Author Keywords: mathematical cognition, mathematical software, user experience, working memory
Development of a Cross-Platform Solution for Calculating Certified Emission Reduction Credits in Forestry Projects under the Kyoto Protocol of the UNFCCC
This thesis presents an exploration of the requirements for and development of a software tool to calculate Certified Emission Reduction (CER) credits for afforestation and reforestation projects conducted under the Clean Development Mechanism (CDM). We examine the relevant methodologies and tools to determine what is required to create a software package that can support a wide variety of projects involving diverse data and computations. During requirements gathering, it was determined that the software package would need to support the ability to enter and edit equations at runtime. To create the software we used Java as the programming language, an H2 database to store our data, and an XML file to store our configuration settings. Through these choices, we can build a cross-platform software solution for the purpose outlined above. The end result is a versatile software tool through which users can create and customize projects to meet their unique needs, as well as utilize the features provided to streamline the management of their CDM projects. Author Keywords: Carbon Emissions, Climate Change, Forests, Java, UNFCCC, XML
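The key requirement identified above, entering and editing equations at runtime, can be sketched as a small expression evaluator. The thesis implements this in Java with an H2 database and XML configuration; the Python version below only illustrates the idea of parsing and safely evaluating a user-entered formula over named project variables, and the example formula and variable names are hypothetical.

```python
# Minimal sketch of runtime-editable equations: parse a user-entered
# arithmetic expression and evaluate it over named project variables.
# (The thesis tool is Java/H2/XML; this Python sketch is illustrative.)
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def evaluate(expr, variables):
    """Safely evaluate an arithmetic expression string at runtime."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        raise ValueError(f"unsupported syntax: {node!r}")
    return walk(ast.parse(expr, mode="eval"))

# a hypothetical biomass-to-CO2e formula entered by the user at runtime
co2e = evaluate("biomass * carbon_fraction * 44 / 12",
                {"biomass": 120.0, "carbon_fraction": 0.5})
```

Restricting evaluation to a whitelist of AST node types (rather than calling `eval`) is what keeps runtime-edited formulas safe to execute.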
Pathways to Innovation
Research and development activities conducted at universities and firms fuel economic growth and play a key role in the process of innovation. Specifically, prior research has investigated the widespread university-to-firm research development path and concluded that universities are better suited to early stages of research while firms are better positioned for later stages. This thesis aims to present a novel explanation for the pervasive university-to-firm research development path. The model developed uses game theory to visualize and analyze interactions between a firm and a university under different strategies. The results reveal that, as academic research signals knowledge, it helps attract tuition-paying students. Generating these tuition revenues is facilitated by university research discoveries, which, once published, a firm can build upon to make new innovative products. In an environment of weak intellectual property rights, moreover, the university-to-firm research development path enables firms to bypass the hefty costs involved in basic research activities. The model also provides a range of solution scenarios where a university and firm may find it viable to initiate a research line. Author Keywords: Game theory, Intellectual property rights, Nash equilibrium, Research and development, University-to-firm research path
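The kind of analysis described can be sketched as a 2x2 game. The strategies and payoff numbers below are hypothetical, chosen only to illustrate how a pure-strategy Nash equilibrium is found; they are not the payoffs from the thesis model.

```python
# Toy 2x2 game between a university (research / idle) and a firm
# (develop / abstain). Payoffs are illustrative assumptions only.
# payoffs[(u_strategy, f_strategy)] = (university_payoff, firm_payoff)
payoffs = {
    ("research", "develop"): (4, 5),   # tuition signal + cheap follow-on R&D
    ("research", "abstain"): (2, 0),
    ("idle",     "develop"): (0, -2),  # firm bears basic-research costs
    ("idle",     "abstain"): (1, 1),
}

def pure_nash(payoffs):
    """Return strategy profiles where neither player gains by deviating."""
    u_strats = {u for u, _ in payoffs}
    f_strats = {f for _, f in payoffs}
    eq = []
    for u, f in payoffs:
        u_best = all(payoffs[(u, f)][0] >= payoffs[(u2, f)][0]
                     for u2 in u_strats)
        f_best = all(payoffs[(u, f)][1] >= payoffs[(u, f2)][1]
                     for f2 in f_strats)
        if u_best and f_best:
            eq.append((u, f))
    return eq

equilibria = pure_nash(payoffs)
```

With these illustrative payoffs, the only pure-strategy equilibrium is the university-to-firm path (university researches, firm develops), which is the flavour of result the thesis formalizes.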
Automated Grading of UML Class Diagrams
Learning how to model the structural properties of a problem domain or an object-oriented design in the form of a class diagram is an essential learning task in many software engineering courses. Since grading UML assignments is a cumbersome and time-consuming task, there is a need for an automated grading approach that can assist instructors by speeding up the grading process, as well as ensuring consistency and fairness for large classrooms. This thesis presents an approach for automated grading of UML class diagrams. A metamodel is proposed to establish mappings between the instructor solution and all student solutions for a class, which allows the instructor to easily adjust the grading scheme. The approach relies on a grading algorithm that uses syntactic, semantic and structural matching to match a student's solution with the instructor's solution. The efficiency of this automated grading approach has been empirically evaluated in two real-world settings: a beginner undergraduate class of 103 students required to create an object-oriented design model, and an advanced undergraduate class of 89 students elaborating a domain model. The experimental results show that the grading approach should be configurable so that it can adapt the grading strategy and strictness to the level of the students and the grading styles of different instructors. It is also important to consider multiple solution variants in the grading process. The grading algorithm and tool are proposed and validated experimentally. Author Keywords: automated grading, class diagrams, model comparison
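The matching step described above can be sketched as a combined score. This is a minimal illustration, not the thesis algorithm: the class representation (a name plus a set of attribute names), the use of string similarity for syntactic matching, set overlap for structural matching, and the equal weights are all simplifying assumptions.

```python
# Toy matching score between a student class and an instructor class,
# combining syntactic (name similarity) and structural (attribute
# overlap) matching. Representation and weights are illustrative.
from difflib import SequenceMatcher

def match_score(student, instructor, w_syntactic=0.5, w_structural=0.5):
    """student / instructor: (class_name, set_of_attribute_names)."""
    name_sim = SequenceMatcher(None, student[0].lower(),
                               instructor[0].lower()).ratio()
    union = student[1] | instructor[1]
    attr_sim = len(student[1] & instructor[1]) / len(union) if union else 1.0
    return w_syntactic * name_sim + w_structural * attr_sim

# a misspelled student class still earns partial credit
score = match_score(("Costumer", {"name", "adress"}),
                    ("Customer", {"name", "address", "email"}))
```

Tolerating near-matches like this is what lets an automated grader award partial credit instead of failing a diagram outright on a typo.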
Fraud Detection in Financial Businesses Using Data Mining Approaches
The purpose of this research is to apply four methods to two datasets, a synthetic dataset and a real-world dataset, and compare the results with the intention of arriving at methods to prevent fraud. The methods used are Logistic Regression, Isolation Forest, an Ensemble Method and Generative Adversarial Networks (GANs). Results show that all four models achieve accuracies between 91% and 99%, except Isolation Forest, which gave 69% accuracy on the synthetic dataset. The four models detect fraud well when built on a training set and tested with a test set. Logistic Regression achieves good results with less computational effort. Isolation Forest achieves lower accuracy when the data is sparse and not preprocessed correctly. Ensemble models achieve the highest accuracy for both datasets. The GAN achieves good results but overfits if a large number of epochs is used. Future work could incorporate other classifiers. Author Keywords: Ensemble Method, GAN, Isolation forest, Logistic Regression, Outliers
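The simplest of the four methods, logistic regression, can be sketched end to end. The toy transaction features and labels below are invented for illustration (the thesis used full synthetic and real-world datasets), and the from-scratch gradient-descent trainer stands in for whatever library implementation the thesis used.

```python
# From-scratch logistic regression on a tiny invented fraud dataset.
# Features and labels are illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

# toy transactions: [scaled_amount, foreign_flag]; label 1 = fraud
X = [[0.1, 0], [0.2, 0], [0.3, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```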
Framework for Testing Time Series Interpolators
The spectrum of a given time series is a characteristic function describing its frequency properties. Spectrum estimation methods require time series data to be contiguous in order for robust estimators to retain their performance. This poses a fundamental challenge, especially when considering real-world scientific data that is often plagued by missing values and/or irregularly recorded measurements. One area of research devoted to this problem seeks to repair the original time series through interpolation. Several algorithms have proven successful for the interpolation of considerably large gaps of missing data, but most are only valid for use on stationary time series: processes whose statistical properties are time-invariant, which is not a common property of real-world data. The Hybrid Wiener interpolator is a method that was designed for repairing nonstationary data, rendering it suitable for spectrum estimation. This thesis presents a computational framework designed for conducting systematic testing of the statistical performance of this method in light of changes to gap structure and departures from the stationarity assumption. A comprehensive comparison of the Hybrid Wiener interpolator against other state-of-the-art algorithms is also explored. Author Keywords: applied statistics, hybrid wiener interpolator, imputation, interpolation, R statistical software, time series
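The framework's core loop can be sketched as: impose a gap on a complete series, repair it, and score the repair. The simple linear interpolator below is a stand-in for the Hybrid Wiener interpolator (which is far more sophisticated), and the sine-wave test series is an invented example.

```python
# Core loop of an interpolator-testing framework: gap imposition,
# repair, and error scoring. Linear interpolation is a stand-in for
# the Hybrid Wiener interpolator; the test series is illustrative.
import math

def impose_gap(series, start, width):
    gapped = list(series)
    for i in range(start, start + width):
        gapped[i] = None                  # mark values as missing
    return gapped

def interpolate_linear(series):
    """Fill interior gaps by linear interpolation between gap edges."""
    repaired = list(series)
    i = 0
    while i < len(repaired):
        if repaired[i] is None:
            j = i
            while repaired[j] is None:    # find the end of the gap
                j += 1
            left, right = repaired[i - 1], repaired[j]
            for k in range(i, j):
                frac = (k - i + 1) / (j - i + 1)
                repaired[k] = left + frac * (right - left)
            i = j
        i += 1
    return repaired

def rmse(truth, estimate):
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(truth, estimate))
                     / len(truth))

truth = [math.sin(0.1 * t) for t in range(100)]
gapped = impose_gap(truth, start=40, width=10)
error = rmse(truth, interpolate_linear(gapped))
```

Systematically varying `start` and `width` over many replicates is what lets the framework characterize how an interpolator's performance degrades with gap structure.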
Historic Magnetogram Digitization
The conversion of historical analog images to time series data was performed by using deconvolution for pre-processing, followed by the use of custom-built digitization algorithms. These algorithms have been developed to be user-friendly, with the objective of aiding in the creation of a data set from decades of mechanical observations collected from the Agincourt and Toronto geomagnetic observatories beginning in the 1840s. The created algorithms follow a structure which begins with pre-processing followed by tracing and pattern detection. Each digitized magnetogram was then visually inspected and the algorithm performance verified to ensure accuracy, and to allow the data to later be connected to create a long-running time series. Author Keywords: Magnetograms
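The tracing step can be sketched as following the darkest pixel down each column of a scanned image. This toy omits the deconvolution pre-processing and pattern-detection stages of the real pipeline, and the synthetic 10x10 "scan" is invented for illustration.

```python
# Toy column-wise tracing of a magnetogram curve: for each column,
# take the darkest pixel's row as the trace position at that time step.
# The synthetic image below stands in for a real scanned magnetogram.

def trace_curve(image):
    """image[row][col] grayscale, 0 = black; returns one row per column."""
    n_cols = len(image[0])
    return [min(range(len(image)), key=lambda r: image[r][c])
            for c in range(n_cols)]

# synthetic scan: white background with a dark trace at known rows
known_rows = [5, 5, 6, 6, 7, 7, 6, 5, 4, 4]
image = [[255] * 10 for _ in range(10)]
for col, row in enumerate(known_rows):
    image[row][col] = 0
recovered = trace_curve(image)
```

Mapping recovered row positions through the instrument's calibration (baseline and scale value) is what turns the pixel trace into physical field values.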
Augmented Reality Sandbox (Aeolian Box)
The AeolianBox is an educational and presentation tool extended in this thesis to represent atmospheric boundary layer (ABL) flow over a deformable surface in a sandbox. It is a hybrid hardware and mathematical model that helps users visually, interactively and spatially understand the natural laws governing ABL airflow. The AeolianBox uses a Kinect V1 camera and a short-focal-length projector to capture a Digital Elevation Model (DEM) of the topography within the sandbox. The captured DEM is used to generate a Computational Fluid Dynamics (CFD) model and project the ABL flow back onto the surface topography within the sandbox. The AeolianBox is designed to be used in a classroom setting. This requires a low time cost for the ABL flow simulation to keep students engaged. Thus, the processes of DEM capture and CFD modelling were investigated to lower the time cost while maintaining key features of the ABL flow structure. A mesh-time sensitivity analysis was also conducted to investigate the trade-off between the number of cells in the mesh and the time cost of both the meshing process and CFD modelling. This allows the user to make an informed decision regarding the level of detail desired in the ABL flow structure by changing the number of cells in the mesh. Infinitely many surface topographies can be created by molding sand inside the sandbox. Therefore, in addition to keeping the time cost low while maintaining key features of the ABL flow structure, the meshing process and CFD modelling are required to be robust to a variety of surface topographies. To achieve these research objectives, this thesis parametrizes the meshing process and CFD modelling. The accuracy of the CFD model for ABL flow used in the AeolianBox was qualitatively validated against airflow profiles captured in the Trent Environmental Wind Tunnel (TEWT) at Trent University using a Laser Doppler Anemometer (LDA).
Three simple geometries, namely a hemisphere, a cube and a ridge, were selected since they are well studied in the literature. The CFD model was scaled to the dimensions of the grid where the airflow was captured in the TEWT. The boundary conditions were also kept the same as in the model used in the AeolianBox. The ABL flow is simulated using software such as OpenFoam and Paraview to build and visualize the CFD model. The AeolianBox is interactive and capable of detecting hands using the Kinect camera, which allows a user to change the topography of the sandbox in real time. The AeolianBox software built for this thesis uses only open-source tools and is accessible to anyone with an existing hardware model of its predecessors. Author Keywords: Augmented Reality, Computational Fluid Dynamics, Kinect Projector Calibration, OpenFoam, Paraview
Representation Learning with Restorative Autoencoders for Transfer Learning
Deep Neural Networks (DNNs) have reached human-level performance in numerous tasks in the domain of computer vision. DNNs are efficient for both classification and the more complex task of image segmentation. These networks are typically trained on thousands of images, which are often hand-labelled by domain experts. This bottleneck creates a promising research area: training accurate segmentation networks with fewer labelled samples. This thesis explores effective methods for learning deep representations from unlabelled images. We train a Restorative Autoencoder Network (RAN) to denoise synthetically corrupted images. The weights of the RAN are then fine-tuned on a labelled dataset from the same domain for image segmentation. We use three different segmentation datasets to evaluate our methods. In our experiments, we demonstrate that through our methods, only a fraction of data is required to achieve the same accuracy as a network trained with a large labelled dataset. Author Keywords: deep learning, image segmentation, representation learning, transfer learning
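The restorative pretraining idea can be sketched at toy scale: train an autoencoder to reconstruct clean inputs from synthetically corrupted ones, then reuse the learned encoder weights. Everything below (a single hidden layer, the dimensions, the noise level, plain gradient descent) is a scaled-down stand-in for the RAN, not the thesis architecture.

```python
# Toy denoising autoencoder: one hidden layer trained to restore clean
# inputs from noise-corrupted copies. Dimensions, noise level and the
# training loop are illustrative stand-ins for the thesis's RAN.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((64, 16))                # 64 tiny "images", 16 pixels
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

d, h, lr = 16, 8, 0.1
W1 = 0.1 * rng.standard_normal((d, h))      # encoder weights
W2 = 0.1 * rng.standard_normal((h, d))      # decoder weights

losses = []
for _ in range(200):
    z = np.tanh(noisy @ W1)                 # latent representation
    recon = z @ W2
    err = recon - clean                     # restore clean from noisy
    losses.append(float(np.mean(err ** 2)))
    gW2 = z.T @ err / len(noisy)            # backprop through decoder
    gW1 = noisy.T @ ((err @ W2.T) * (1 - z ** 2)) / len(noisy)
    W1 -= lr * gW1
    W2 -= lr * gW2

# after pretraining, the encoder weights (W1) would be fine-tuned on the
# smaller labelled dataset for the downstream segmentation task
```

The transfer step is the point: the encoder never needed labels to learn useful structure, which is why only a fraction of labelled data is then required downstream.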
Support Vector Machines for Automated Galaxy Classification
Support Vector Machines (SVMs) are deterministic, supervised machine learning algorithms that have been successfully applied to many areas of research. They are heavily grounded in mathematical theory and are effective at processing high-dimensional data. This thesis models a variety of galaxy classification tasks using SVMs and data from the Galaxy Zoo 2 project. SVM parameters were tuned in parallel using resources from Compute Canada, and a total of four experiments were completed to determine whether invariance training and ensembles can be utilized to improve classification performance. It was found that SVMs performed well at many of the galaxy classification tasks examined, and the additional techniques explored did not provide a considerable improvement. Author Keywords: Compute Canada, Kernel, SDSS, SHARCNET, Support Vector Machine, SVM
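The classifier at the heart of the thesis can be sketched in its simplest form: a linear SVM trained with the Pegasos sub-gradient method. The toy two-feature "spiral vs elliptical" data is invented for illustration; the real work uses tuned kernel SVMs on Galaxy Zoo 2 features.

```python
# Bare-bones linear SVM via the Pegasos sub-gradient method on toy
# 2-D galaxy features (invented data; labels y in {-1, +1}).
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Returns the weight vector w (bias term omitted for brevity)."""
    rnd = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rnd.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # sub-gradient step: shrink w, push toward margin violators
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

# hypothetical features: [concentration, bulge fraction]
X = [[2.0, 1.5], [1.8, 1.9], [2.2, 1.7],       # ellipticals (+1)
     [-1.5, -2.0], [-1.9, -1.6], [-2.1, -1.8]]  # spirals (-1)
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) >= 0 else -1
         for xi in X]
```

Swapping the inner product for a kernel function is what turns this into the kernel SVMs the thesis tunes in parallel.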

