Design and implement solutions to a range of computer vision applications and problems


CMP9135M Computer Vision - University of Lincoln

Description of Assessment Task and Purpose:

Learning Outcome 1: Critically evaluate and apply the theories, algorithms, techniques and methodologies involved in computer vision.

Learning Outcome 2: Design and implement solutions to a range of computer vision applications and problems, and evaluate their effectiveness.

Requirements:
This assessment comprises three assessed tasks, as detailed in the following page.

1. Image segmentation and detection. Weight: 40% of this component
2. Feature calculation. Weight: 30% of this component
3. Object tracking. Weight: 30% of this component

Task 1: Image Segmentation and Detection

Download and unzip the file 'skin lesion dataset.zip' from Blackboard. You should obtain a set of 120 images. Among those images, there are 60 skin lesion colour images and 60 corresponding binary masks (ground-truth segmentation).

Please use image processing techniques to implement the following tasks. Note that you are encouraged to develop one model, with the same parameter settings, for all the images.

Task 1: Object segmentation. For each skin lesion image, use image processing techniques to automatically segment the lesion object. Examples of a lesion image (Fig. 1(a)) and the segmented lesion (Fig. 1(b)) are shown in Figure 1.
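One possible baseline (not a prescribed method) is grayscale conversion followed by Otsu thresholding and morphological clean-up. The sketch below assumes scikit-image and SciPy are available, assumes the lesion is darker than the surrounding skin, and uses illustrative parameter values (blur sigma, structuring-element size, minimum object size):

```python
# Hypothetical baseline segmentation: Otsu thresholding plus morphological clean-up.
# Assumes the lesion is darker than the surrounding skin; parameter values are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, morphology

def segment_lesion(image_path):
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)
    smooth = filters.gaussian(gray, sigma=2)          # suppress noise and fine hair
    thresh = filters.threshold_otsu(smooth)           # global threshold, same rule for all images
    mask = smooth < thresh                            # keep the darker (lesion) region
    mask = morphology.binary_closing(mask, morphology.disk(5))
    mask = ndi.binary_fill_holes(mask)
    mask = morphology.remove_small_objects(mask, min_size=500)
    return mask.astype(np.uint8)
```

Because the thresholding rule and the morphology parameters are fixed, a pipeline like this can be applied unchanged to all 60 images, as the brief encourages.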

Task 2: Segmentation evaluation. For each skin lesion image, calculate the Dice Similarity Score (DS), defined in Equation 1, where M is the segmented lesion mask obtained from Task 1 and S is the corresponding ground-truth binary mask.

DS = 2|M ∩ S| / (|M| + |S|)    (1)

The calculated DS shall be between 0 and 1. For example, DS is 1 if your segmentation matches the ground-truth mask perfectly, whilst DS is 0 if there is no overlap between your segmentation and the ground-truth mask.
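Equation 1 translates directly into a few lines of NumPy. The sketch below assumes both masks are arrays of the same shape that can be interpreted as boolean:

```python
import numpy as np

def dice_score(pred_mask, gt_mask):
    """Dice Similarity Score, Equation 1: DS = 2|M intersect S| / (|M| + |S|)."""
    pred = np.asarray(pred_mask).astype(bool)
    gt = np.asarray(gt_mask).astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:              # both masks empty; treat as a perfect match
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```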


Figure 1. Skin Lesion Segmentation

Your report should include:

1) For three skin images (ISIC_0000019, ISIC_0000095 and ISIC_0000214), the original image, the final segmented lesion binary image, and the calculated DS value for each of the three images.

2) For all 60 skin images, a bar graph with the x-axis representing the image number and the y-axis representing the corresponding DS (a minimal plotting sketch follows this list).

3) The mean and standard deviation of the DS over all 60 images.

4) A brief description and justification of the implementation steps.
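As a minimal sketch of the plotting and summary steps referred to above, the following assumes a list of 60 Dice scores ordered by image number (the function name, output file name and labels are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def report_dice(ds_values):
    """Bar graph of one DS per image, plus mean and standard deviation."""
    ds = np.asarray(ds_values, dtype=float)
    plt.bar(np.arange(1, len(ds) + 1), ds)
    plt.xlabel("Image number")
    plt.ylabel("Dice Similarity Score (DS)")
    plt.savefig("dice_bar_graph.png", dpi=200)
    print(f"Mean DS: {ds.mean():.3f}  Std DS: {ds.std(ddof=1):.3f}")
```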

Task 2: Feature Calculation

Download the image ('ImgPIA.jpeg') from Blackboard. This part of the assignment deals with Feature Extraction in both the Frequency and Spatial domains.

Task 1: Read the image ('ImgPIA.jpeg') and extract features for both radius and direction, as described in the Spectral Approach session of the Feature Extraction lecture. For additional marks you can change the values of radius and angle, and present those values in a plot or table.
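As a sketch of the spectral approach, under the assumption that the lecture's features are sums of the Fourier power spectrum over radial rings S(r) and angular wedges S(theta), the following computes both feature vectors; the numbers of bins are illustrative and can be varied for the additional-marks comparison:

```python
import numpy as np
from skimage import io

def spectral_features(image_path, n_radii=8, n_angles=8):
    """Sum of the Fourier power spectrum inside radial rings S(r) and angular wedges S(theta)."""
    gray = io.imread(image_path, as_gray=True)
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2

    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)                 # radius of each frequency bin
    theta = np.arctan2(y - h // 2, x - w // 2) % np.pi   # direction folded into [0, pi)

    r_edges = np.linspace(0, r.max() + 1e-9, n_radii + 1)
    t_edges = np.linspace(0, np.pi, n_angles + 1)
    s_r = [power[(r >= r_edges[i]) & (r < r_edges[i + 1])].sum() for i in range(n_radii)]
    s_t = [power[(theta >= t_edges[i]) & (theta < t_edges[i + 1])].sum() for i in range(n_angles)]
    return np.array(s_r), np.array(s_t)
```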

Task 2: Read the image ('ImgPIA.jpeg') and extract features from the image histogram (i.e. 1st order), at least six (6) features from the co-occurrence matrix (the original paper by Haralick has also been made available to you), and at least five (5) features from the Gray Level Run Length (GLRL) matrix. Please note that both the co-occurrence and GLRL-based features can be directional and computed as a function of the distance between pixel coordinates. For additional marks you can change the bit-depth of the image (i.e. 8, 6, 4 bit) and recalculate the features, presenting them as a plot or table.
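A sketch of the first-order and co-occurrence parts is given below, assuming scikit-image's graycomatrix/graycoprops; the GLRL features are not covered by scikit-image and would need a custom run-length routine or a third-party package. The distance, angle and bit-depth arguments are placeholders for the directional and bit-depth comparisons mentioned above:

```python
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

def texture_features(image_path, distance=1, angle=0.0, bits=8):
    """First-order (histogram) statistics plus six co-occurrence (GLCM) features."""
    gray = io.imread(image_path, as_gray=True)              # float image in [0, 1]
    levels = 2 ** bits
    img = np.round(gray * (levels - 1)).astype(np.uint8)    # requantise to the chosen bit-depth

    # First-order features from the normalised histogram
    i = np.arange(levels)
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    mean = np.sum(i * p)
    var = np.sum((i - mean) ** 2 * p)
    first_order = {
        "mean": mean,
        "variance": var,
        "skewness": np.sum((i - mean) ** 3 * p) / var ** 1.5,
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
    }

    # Six GLCM features for one (distance, angle) offset; repeat for other offsets as required
    glcm = graycomatrix(img, [distance], [angle], levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    glcm_features = {prop: graycoprops(glcm, prop)[0, 0] for prop in props}
    return first_order, glcm_features
```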

For both tasks, analysis and discussion of your findings are expected.

Task 3: Object Tracking

Download from Blackboard the data files 'x.csv' and 'y.csv', which contain the real coordinates [x,y] of a moving target, and the files 'a.csv' and 'b.csv', which contain their noisy version [a,b] provided by a generic video detector (e.g. frame-to-frame image segmentation of the target).

Implement a Kalman filter in a software application that accepts the noisy coordinates [a,b] as input and produces the estimated coordinates [x*,y*] as output. For this, you should use a Constant Velocity motion model F with constant time intervals Δt = 0.1 and a Cartesian observation model H. The covariance matrices Q and R of the respective noises are the following:

[The covariance matrices Q and R are given as a figure in the original brief on Blackboard.]
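A minimal constant-velocity Kalman filter sketch is given below, assuming a state vector [x, vx, y, vy]; the identity-scaled Q and R are placeholders only and should be replaced with the covariance matrices given in the brief:

```python
import numpy as np

def kalman_track(a, b, dt=0.1, Q=None, R=None):
    """Constant-velocity Kalman filter over noisy detections [a, b].
    The state is [x, vx, y, vy]. The identity-scaled Q and R defaults are placeholders
    only; replace them with the covariance matrices given in the brief."""
    F = np.array([[1, dt, 0,  0],
                  [0,  1, 0,  0],
                  [0,  0, 1, dt],
                  [0,  0, 0,  1]], dtype=float)       # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)         # Cartesian observation of (x, y)
    Q = np.eye(4) * 0.01 if Q is None else np.asarray(Q, dtype=float)
    R = np.eye(2) if R is None else np.asarray(R, dtype=float)

    x = np.array([a[0], 0.0, b[0], 0.0])              # initialise from the first detection
    P = np.eye(4)
    estimates = []
    for z in zip(a, b):
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        innovation = np.asarray(z, dtype=float) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ innovation                        # update
        P = (np.eye(4) - K @ H) @ P
        estimates.append((x[0], x[2]))
    return np.array(estimates)
```

The CSV files can be loaded with numpy.loadtxt (adjusting the delimiter to the actual file layout), and the returned [x*, y*] array can then be plotted alongside [x, y] and [a, b] as required below.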

1) You should plot the estimated trajectory of coordinates [x*,y*], together with the real [x,y] and the noisy ones [a,b] for comparison.

2) You should also assess the quality of the tracking by calculating the mean and standard deviation of the absolute error and the Root Mean Squared Error (RMSE), i.e. compare both the noisy and the estimated coordinates to the ground truth.
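A sketch of those error metrics is shown below, assuming the coordinates are stacked as N x 2 arrays and interpreting the absolute error as the Euclidean distance per time step (per-axis absolute errors could be reported instead):

```python
import numpy as np

def tracking_errors(estimated, truth):
    """Mean/std of the absolute (Euclidean) error per time step, plus the RMSE."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(truth, dtype=float)
    err = np.linalg.norm(est - gt, axis=1)            # one error value per time step
    return {
        "mean_abs_error": err.mean(),
        "std_abs_error": err.std(ddof=1),
        "rmse": np.sqrt(np.mean(err ** 2)),
    }
```

Calling this once with the noisy detections [a,b] and once with the Kalman estimates [x*,y*], both against the ground truth [x,y], gives the comparison requested above.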

CMP9137M Advanced Machine Learning - University of Lincoln

Assessment Task and Purpose:

This assessment comprises two tasks on machine learning, as explained in the section "format for assessment" (below). Your submission should include a concise report of maximum 6 pages, using font size 11 and excluding the cover sheet, references and appendices. The report should describe your proposed solutions to both tasks, include a set of relevant references from the literature, and include the source code of your solutions as an appendix.

Learning Outcome 1: Critically appraise a range of machine learning techniques, identifying their strengths and weaknesses, and selecting appropriate methods to serve particular roles.
Learning Outcome 2: Analyse the "state of the art" in machine learning, including an understanding of current applications.
Learning Outcome 3: Use machine learning software to solve complex real-world problems in an application domain of interest.

TASK 1:
You are required to use Machine Learning techniques to tackle the problem of "Detection of Pneumonia in Medical Images". According to NHS records, there were 272 thousand hospitalisations for pneumonia in England in 2019. In the USA, it is one of the top 10 causes of death. Diagnosing pneumonia requires careful analysis of chest radiographs by highly trained specialists who are exposed to large numbers of images every day. Solutions that automate early diagnosis would help in detecting the disease. This task consists of creating image classifiers to predict whether there is pneumonia (see image on the right) or not (see image on the left) in an input image.

The dataset used in this task is from the following Kaggle competition:

You are expected to explore a range of machine learning classifiers, inspired by the various models and categories explored within the module and beyond (i.e. from reading and literature). At least two of the deep learning classifiers discussed in the lectures and/or workshops should be included as baselines. In addition, at least one of your proposed classifiers should attempt to go beyond the module in terms of architecture, approach, and/or algorithmic details.

You will then investigate their performance, and compare and critique them to justify your recommended classifier(s). This should include metrics such as TP/FP rates, Precision-Recall, F-measure, and any others that are relevant. In this assignment you are free to train any classifier, to do any pre-processing of the data, and to implement your own algorithm(s) instead of only using libraries. While you are encouraged to make your own implementations, you can use libraries (such as Tensorflow or Pytorch) to train your deep neural networks. However, you should clearly mention your resources, acknowledge them appropriately, and compare the classifiers and their results in your report.
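As a framework-agnostic sketch of the evaluation step, the following assumes arrays of ground-truth labels, predicted labels and (optionally) predicted probabilities from any trained classifier, and assumes the label mapping 1 = pneumonia, 0 = normal:

```python
import numpy as np
from sklearn.metrics import (classification_report, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

def evaluate_classifier(y_true, y_pred, y_prob=None):
    """TP/FP rates, precision, recall and F-measure for a binary pneumonia classifier.
    Assumes label 1 = pneumonia and label 0 = normal."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary")
    results = {
        "TP_rate": tp / (tp + fn),
        "FP_rate": fp / (fp + tn),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
    if y_prob is not None:                            # probability of the positive class
        results["roc_auc"] = roc_auc_score(y_true, y_prob)
    print(classification_report(y_true, y_pred, target_names=["normal", "pneumonia"]))
    return results
```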

TASK 2:
You are required to use Machine Learning to tackle the problem of "Game Learning". Your goal in this task is to train Deep Reinforcement Learning (DRL) agents that receive image-inputs from a game simulator, and that output game actions to play the game autonomously. The following simulator will be used to play the game of SuperMarioBros 1-1-v0:

You are required to use your knowledge acquired in the module regarding DRL agents, and knowledge acquired from additional recommended readings. This will be useful for investigating the performance of those agents, and for comparing and criticising them so you can recommend your best agent. You are expected to evaluate your agents using metrics such as Avg. Reward, Avg. Q-Value, Avg. Game Score, Avg. Steps Per Episode, and Training and Test Times.

You are expected to train at least three different agents (in addition to any baseline provided in the module), which can differ in their state representation (CNN, CNN-RNN, CNN-Transformer) and/or in their learning algorithms. Once you have decided on the agents that you want to report, you should train them with three different seeds and average their results. If you report learning curves, they should be based on those averaged results instead of a single seed (run). You are expected to justify your choices in terms of architectures, hyperparameters and algorithms.
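A minimal sketch of the seed-averaging requirement is shown below; it assumes you have logged one equal-length episode-reward curve per seed for an agent (the function name and labels are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_mean_learning_curve(reward_curves, label):
    """Average equal-length episode-reward curves recorded with different random seeds."""
    curves = np.vstack(reward_curves)                 # shape: (n_seeds, n_episodes)
    mean, std = curves.mean(axis=0), curves.std(axis=0)
    episodes = np.arange(1, curves.shape[1] + 1)
    plt.plot(episodes, mean, label=label)
    plt.fill_between(episodes, mean - std, mean + std, alpha=0.2)   # seed-to-seed variability
    plt.xlabel("Episode")
    plt.ylabel("Average reward")
    plt.legend()
```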

In this assignment, you are free to train any DRL agent, in any programming language, to pre-process the data, and to implement your own solutions whenever possible. While you are free to use libraries, you should not use fully available solutions. Please mention the resources used, acknowledge them appropriately, and compare the agents in your report.

CMP9785M Cloud Development - University of Lincoln

Assessment Task and Purpose:

Learning Outcome 1: Critically evaluate and compare cloud-native application design to standard monolithic development practices;

Learning Outcome 2: Design and develop a secure, scalable cloud native application using a range of core services as part of a cloud systems development lifecycle;

Learning Outcome 3: Implement DevOps practices for continuous integration/continuous delivery and testing strategies;

Overview
Your task for this assessment is to design, develop, and deploy a full-stack IoT cloud application using a range of cloud services. You can choose the IoT application theme, and you should use the simulated device console app to send data to your cloud services. The application should include the use of Azure's IoT service stack as well as DevOps testing strategies. The assessment is in two parts: the first is a written report documenting the process of developing the full-stack cloud application, and the second is the development work itself.

Part 1 - Report
Using the Azure cloud vendor as directed by the delivery team, you will draw on the knowledge and skills gained in the module to research and select the cloud services you will use for your IoT application solution. You will document this research in a short report that includes a cloud architecture diagram and a list of the cloud services for your solution. The report should be 3000 words maximum and include the sections below:
Introduction - The theme and scope of your full-stack cloud application (~500 words)
Cloud Architecture Diagram - A visual representation of your selected cloud services, highlighting interconnectivity between services
Cloud Services - A list of the cloud services in your diagram with a discussion on their purpose and use (~1000 words)
Development - A discussion on the development challenges you faced and how you addressed them (~1000 words)
DevOps/Testing - A discussion on the DevOps/testing strategy used for the application (~500 words)
References - A list of supporting academic and official vendor documentation references.

Part 2 - Development
You should develop and deploy the full stack cloud IoT application as presented in your written report. Essentially, you should use your cloud architecture design diagram as the blueprint to develop and deploy your application. You will use the same cloud vendor platform as taught throughout the module. You are not permitted to develop for any other cloud platform.
The application must include the following services, please note where these are run and deployed:
• Azure IoT Hub (running in the cloud)
• Cosmos DB (running in the cloud)
• Azure Functions (running locally in VSC)
• Azure App Service (running locally in VSC)
You must also modify and use the simulated device console app to generate sensor data for your cloud services, specifically for sending to the Azure IoT Hub. You can choose the type of sensor data you wish to send to your cloud services, to align with your application, by modifying the code of the simulated device app.
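The provided console app may be written in another language; purely as an illustration of the telemetry loop, the sketch below uses the azure-iot-device Python SDK, a device connection string from your IoT Hub, and made-up temperature/humidity readings:

```python
# Illustrative telemetry loop only; the connection string and sensor fields are placeholders.
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<your-device-connection-string>"   # from the device registered in IoT Hub

def run_simulated_device():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        while True:
            payload = {"temperature": round(random.uniform(18, 30), 2),
                       "humidity": round(random.uniform(30, 70), 2)}
            client.send_message(Message(json.dumps(payload)))    # telemetry to IoT Hub
            time.sleep(5)
    finally:
        client.disconnect()

if __name__ == "__main__":
    run_simulated_device()
```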

Attachment: Computer Vision.rar

Attachment: Cloud Development Assessment.rar
