
Database Management System - Big Data Tools and Techniques Assignment

The assignment must be done with VMware Workstation 14 Player, using the virtual machine supplied by the university on a pen drive. Since the pen drive cannot be handed over in person, its contents have been copied into the folder. The steps to connect to the Cloudera software stored on the pen drive are listed below.

STEPS:

1) Open VMware Workstation 14 Player

2) Click on "Open a Virtual Machine"

3) Open the folder into which you copied the pen drive contents

4) Select the cloudera-training-CAPSpark-student-vm folder

5) Select cloudera-training-capspark-student-rev-dh5.4.3a

6) Click on "Play virtual machine"

Learning outcomes of this assessment - The learning outcomes covered by this assignment are:

Provide a broad overview of the general field of 'big data systems'

Develop specialised knowledge in areas that demonstrate the interaction and synergy between ongoing research and practical deployment in this field of study.

Key skills to be assessed -

This assignment aims to assess your skills in:

The usage of common big data tools and techniques

Your ability to implement a standard data analysis process

  • Loading the data
  • Cleansing the data
  • Analysis
  • Visualisation / Reporting

Use of Python, SQL and Linux terminal commands

Task - You will be given a dataset and a set of problem statements. Where possible, you are required to implement each solution in both SQL (using either Hive or Impala) and Spark (using pyspark or spark-shell); you will need to carefully explain any reasons for not supplying both solutions.

General instructions

You will follow a typical data analysis process:

1. Load / ingest the data to be analysed

2. Prepare / clean the data

3. Analyse the data

4. Visualise results / generate report

For steps 1, 2 and 3 you will use the virtual machine (and the software installed on it) provided as part of this module. The data for this assignment will be supplied as a MySQL dump, which you will need to copy onto the virtual machine and work with from there.
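
For example, loading the dump could look like the following shell sketch; the file name twitter_dump.sql and the database name twitter are placeholders, not part of the brief, so substitute the names of the supplied dump:

    # Create a target database on the VM's MySQL server and replay the dump
    # into it. "twitter_dump.sql" and "twitter" are illustrative names only.
    mysql -u root -p -e "CREATE DATABASE twitter;"
    mysql -u root -p twitter < twitter_dump.sql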

The virtual machine has a MySQL server running, and you will need to load the data into that server. From there you will be required to use Sqoop to get the data into Hadoop.
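
A minimal Sqoop sketch follows, assuming the dump created a MySQL database called twitter containing a table called tweets; the connection string, credentials and table name are assumptions to be adjusted to the supplied data:

    # Import one MySQL table into Hadoop and register it as a Hive table.
    # Database, table and credential values are assumptions, not given in the brief.
    sqoop import \
      --connect jdbc:mysql://localhost/twitter \
      --username root -P \
      --table tweets \
      --hive-import --hive-table tweets \
      -m 1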

For the cleansing, preparation and analysis you will implement the solution twice (where possible): first in SQL using either Hive or Impala, and then in Spark using either pyspark or spark-shell.
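
As an illustration of implementing the same logic twice, the pyspark sketch below runs a HiveQL aggregation and its DataFrame equivalent side by side. The table name tweets and column game_id are assumptions about the supplied schema, and HiveContext is used on the assumption that the VM ships a pre-2.0 version of Spark:

    # Count tweets per game, once via HiveQL and once via the DataFrame API.
    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="tweet-counts")  # already provided as 'sc' in the pyspark shell
    sqlContext = HiveContext(sc)

    # SQL formulation, executed by Spark against the Hive metastore
    sql_counts = sqlContext.sql(
        "SELECT game_id, COUNT(*) AS n FROM tweets GROUP BY game_id")

    # Equivalent DataFrame formulation
    df_counts = sqlContext.table("tweets").groupBy("game_id").count()

    sql_counts.show(10)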

For the visualisation of the results you are free to use any tool that fulfils the requirements: tools you have learned about, such as Python's matplotlib, SAS or Qlik, or any other free open-source tool you find suitable.
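
For instance, a matplotlib sketch along the following lines would satisfy the requirement, assuming the analysis step exported a two-column CSV of minute,count; the file name and column layout are placeholders:

    # Plot tweets per minute from an exported CSV and save the figure.
    import csv
    import matplotlib.pyplot as plt

    minutes, counts = [], []
    with open("tweets_per_minute.csv") as f:  # hypothetical export from step 3
        for minute, count in csv.reader(f):
            minutes.append(int(minute))
            counts.append(int(count))

    plt.plot(minutes, counts)
    plt.xlabel("Match minute")
    plt.ylabel("Tweets per minute")
    plt.title("Tweet frequency over the game")
    plt.savefig("tweets_per_minute.png")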

Extra features to be implemented

To get more than a "Satisfactory" mark, a number of extra features should be implemented. Features include, but are not limited to:

  • Creation of a single script that executes the entire process, from loading the supplied data to exporting the result data required for visualisation.
  • The Spark implementation is done in Scala as opposed to Python.
  • Usage of parametrised scripts that allow you to pass parameters to the queries to dynamically set data selection criteria, for instance passing datetime parameters to select tweets in that time period (see the sketch after this list).
  • Plotting of extra graphs visualising useful information discovered through your own exploration, beyond what the other problem statements cover.
  • Extraction of statistical information from the data.
  • The usage of file formats other than plain text.
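
A minimal sketch of such a parametrised script follows, assuming a Hive query file top_games.hql that reads the variables via ${hiveconf:start_time} and ${hiveconf:end_time}; the file name and variable names are illustrative:

    #!/bin/bash
    # Pass a time window into a Hive query as hiveconf variables.
    START="$1"  # e.g. "2014-06-12 16:00:00"
    END="$2"    # e.g. "2014-06-12 18:00:00"
    hive -hiveconf start_time="$START" -hiveconf end_time="$END" -f top_games.hql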

The data

You will be given a dataset containing simplified Twitter data pertaining to a number of football games. The dataset will be supplied in compressed format and will be made available online for download or can be supplied by USB memory stick. Further information regarding each game, including the teams playing and their official hashtags, start and end times, as well as the times of any goals, will also be provided.

Problem statements

You are a data analyst / data scientist working for an event security company that monitors real-time events to analyse the level of potential disturbance. In order to assess commotion at an event, the company monitors the Twitter feeds pertaining to the event. They would like answers to the following questions (in all of the following, you should consider half time and overtime as 'during-game'); a sketch of the 'during-game' restriction follows the question list.

Questions / problem statements:

1. Extract and present the average number of tweets per 'during-game' minute for the top 10 (i.e. most tweeted about during the event) games.

2. Rank the games according to the number of distinct users tweeting 'during-game' and present the information for the top 10 games, including the number of distinct users for each.

3. Find the top 3 teams that played in the most games. Rank their games in order of highest number of 'during-game' tweets (include the frequency in your output).

4. Find the top 10 (ordered by number of tweets) games which have the highest 'during-game' tweeting spike in the last 10 minutes of the game.

5. As well as the official hashtags, each tweet may be labelled with other hashtags. Restricting the data to 'during-game' tweets, list the top 10 most common non-official hashtags over the whole dataset, with their frequencies.

6. Draw a graph of the progress of one of the games (the game you choose should have a complete set of tweets for the entire duration of the game). It may be useful to summarise the tweet frequencies in 1-minute intervals.
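
As a purely illustrative starting point, the 'during-game' restriction common to questions 1-5 could be expressed in HiveQL as below; the tables tweets and games and all column names are assumptions about the supplied schema, not part of the brief:

    -- Count 'during-game' tweets per game by joining tweets to game times.
    -- Table and column names are hypothetical placeholders.
    SELECT g.game_id, COUNT(*) AS during_game_tweets
    FROM tweets t
    JOIN games g ON t.game_id = g.game_id
    WHERE t.tweet_time BETWEEN g.start_time AND g.end_time
    GROUP BY g.game_id
    ORDER BY during_game_tweets DESC
    LIMIT 10;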

Report - A 4000-5000 word report that documents your solution.

Attachment:- Assignment Files.rar
