Data Science Semester-1 (ICT603) Assignment Help
Assessment Overview
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
| 1 | Report – Statistical Analysis of Business Data (Individual) (1000 Words) | Session 6 | 30% | 1, 2 | 1, 2 |
| 2 | Data Acquisition and Data Mining (Group): Part A – Report (1000 Words); Part B – Presentations | Part A – Session 9; Part B – Session 10 | Part A – 20%; Part B – 10%; Total – 30% | 1, 3, 4 | 1, 2, 3 |
| 3 * | Data Modelling Project (Group): Part A – Report (1500 Words); Part B – Presentations | Part A – Session 13 (Study Week); Part B – Session 14 (Exam Week) | Part A – 30%; Part B – 10%; Total – 40% | 4, 5 | 1, 2, 3, 4, 5 |
Note: * denotes a ‘Hurdle Assessment Item’: students must achieve at least 40% in this item to pass the unit.
Referencing guides
You must reference all the sources of information you have used in your assessments. Please use the IEEE referencing style when referencing in your assessments in this unit. Refer to the library’s referencing guides for more information.
Academic misconduct
VIT requires that its students maintain the integrity of their academic studies at an acceptable level of excellence. VIT will adhere to its VIT Policies, Procedures and Forms, which explain the importance of staff and student honesty in relation to academic work and outline the kinds of behaviours that constitute “academic misconduct”, including plagiarism.
Late submissions
In cases where there are no accepted mitigating circumstances as determined through VIT Policies, Procedures and Forms, late submission of assessments will lead automatically to the imposition of a penalty. Penalties will be applied as soon as the deadline is reached.
Short extensions and special consideration
Special Consideration is a request for:
• Extensions of the due date for an assessment, other than an examination (e.g. assignment extension).
• Special Consideration in relation to a completed assessment, including an end-of-unit examination.
Students wishing to request Special Consideration for an assessment whose due date has not yet passed must email the teaching team with their Request for Special Consideration as early as possible, and prior to the assessment’s due date and time, along with any accompanying documents, such as medical certificates.
For more information, visit VIT Policies, Procedures and Forms.
Inclusive and equitable assessment
Reasonable adjustment in assessment methods will be made to accommodate students with a documented disability or impairment. Contact the unit teaching team for more information.
Contract Cheating
Contract cheating usually involves the purchase of an assignment or piece of research from another party. This may be facilitated by a fellow student or friend, or purchased from a website. Other forms of contract cheating include paying another person to sit an exam in the student’s place. Contract cheating warning:
• By paying someone else to complete your academic work, you don’t learn as much as you could have if you did the work yourself.
• You are not prepared for the demands of your future employment.
• You could be found guilty of academic misconduct.
• Many for-pay contract cheating companies recycle assignments despite guarantees of “original, plagiarism-free work”, so similarity is easily detected by Turnitin.
• Penalties for academic misconduct include suspension and exclusion.
• Students in some disciplines are required to disclose any findings of guilt for academic misconduct before being accepted into certain professions (e.g., law).
• You might disclose your personal and financial information in an unsafe way, leaving yourself open to many risks including possible identity theft.
• You also leave yourself open to blackmail – if you pay someone else to do an assignment for you, they know you have engaged in fraudulent behaviour and can always blackmail you.
Grades
We determine your grades according to the following Grading Scheme:

| Grade | Percentage |
| A | 80% – 100% |
| B | 70% – 79% |
| C | 60% – 69% |
| D | 50% – 59% |
| F | 0% – 49% |
Assessment Details for Assessment Item 1: Report – Statistical Analysis of Business Data
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
| 1 | Report – Statistical Analysis of Business Data (Individual) (1000 Words) | Session 6 | 30% | 1, 2 | 1, 2 |
Objective
This assessment item relates to the unit learning outcomes as in the unit descriptor. It is designed to give students experience in analyzing a suitable dataset, creating different visualizations in a dashboard, and improving presentation skills relevant to the Unit of Study subject matter.
Case Study:
You are a data scientist hired by a retail company, “SmartMart,” which operates a chain of grocery stores. SmartMart has been in the market for several years and has a significant customer base. However, the company is facing challenges in optimizing its operations and maximizing profits. As a data scientist, your task is to analyze the provided dataset and identify areas where data science techniques can be applied to create business value for SmartMart.
Dataset:
The dataset provided contains information on SmartMart’s sales transactions over the past year. It includes data such as:
• Date and time of each transaction
• Customer ID
• Product ID
• Quantity sold
• Unit price
• Total transaction amount
• Store ID
Tasks:
Apply appropriate statistical analysis techniques to extract valuable information from the dataset. This may include but is not limited to:
a. Descriptive statistics
b. Correlation analysis
c. Hypothesis testing
d. Time-series analysis
➢ Identify key findings and insights from your analysis that can help SmartMart make data-driven decisions to optimize its operations and increase profitability.
➢ Present your analysis results in a clear and concise manner, including visualizations and explanations where necessary.
➢ Provide recommendations on specific strategies or actions that SmartMart can take based on your analysis.
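For instance, tasks (a), (b), and (d) can be sketched in a few lines of pandas. The snippet below is illustrative only: it builds a hypothetical four-row stand-in for the SmartMart dataset (column names follow the generator script given later in this brief) and computes descriptive statistics, a correlation, and a monthly revenue series:

```python
import pandas as pd

# Hypothetical miniature of the SmartMart dataset (columns mirror the generator script)
df = pd.DataFrame({
    "Date & Time": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-02-03", "2023-02-28"]),
    "Quantity Sold": [2, 5, 1, 3],
    "Unit Price": [10.0, 4.0, 25.0, 8.0],
})
df["Total Transaction Amount"] = df["Quantity Sold"] * df["Unit Price"]

# (a) Descriptive statistics for transaction amounts
desc = df["Total Transaction Amount"].describe()

# (b) Correlation between quantity sold and revenue
corr = df["Quantity Sold"].corr(df["Total Transaction Amount"])

# (d) Time-series view: total revenue per calendar month
monthly = df.set_index("Date & Time")["Total Transaction Amount"].resample("MS").sum()
```

Your actual report would apply the same calls to the full 1000-row generated dataset and interpret the results rather than just printing them.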
Deliverables:
1. A written report documenting your analysis process, findings, and recommendations, containing the Python code/scripts used for data analysis (with comments explaining the code logic and methodology) and visualizations (e.g., plots, charts) supporting your analysis and findings.
Note:
Please provide a single report that includes screenshots of Python code along with corresponding results, as well as screenshots of the visualizations that support your analysis.
Dataset:
Use the program below to generate a dataset with 1000 rows and the following 7 columns:
➢ Date and time of transaction
➢ Customer ID
➢ Product ID
➢ Quantity sold
➢ Unit price
➢ Total transaction amount
➢ Store ID
import pandas as pd
import numpy as np
import random
from datetime import datetime, timedelta
# Generate 1000 random dates and times within a specific range
start_date = datetime(2023, 1, 1)
end_date = datetime(2023, 12, 31)
date_times = [start_date + timedelta(seconds=random.randint(0, int((end_date - start_date).total_seconds()))) for _ in range(1000)]
# Generate random customer IDs
customer_ids = ['C' + str(i).zfill(4) for i in range(1, 1001)]
# Generate random product IDs
product_ids = ['P' + str(i).zfill(3) for i in range(1, 101)]
# Generate random quantities sold
quantities_sold = np.random.randint(1, 10, size=1000)
# Generate random unit prices
unit_prices = np.random.uniform(1, 100, size=1000)
# Calculate total transaction amounts
total_transaction_amounts = quantities_sold * unit_prices
# Generate the pool of store IDs
store_id_pool = ['S' + str(i).zfill(3) for i in range(1, 11)]
# Randomly assign store IDs to transactions
store_ids = [random.choice(store_id_pool) for _ in range(1000)]
# Create DataFrame
data = {
    'Date & Time': date_times,
    'Customer ID': random.choices(customer_ids, k=1000),
    'Product ID': random.choices(product_ids, k=1000),
    'Quantity Sold': quantities_sold,
    'Unit Price': unit_prices,
    'Total Transaction Amount': total_transaction_amounts,
    'Store ID': store_ids,
}
df = pd.DataFrame(data)
# Convert Date & Time column to datetime format
df['Date & Time'] = pd.to_datetime(df['Date & Time'])
# Sort DataFrame by Date & Time
df = df.sort_values(by='Date & Time')
# Reset index
df.reset_index(drop=True, inplace=True)
# Print DataFrame
print(df)
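The generated dataset can then be used for hypothesis testing (task c). The sketch below shows one possible approach: a permutation test comparing mean transaction amounts between two hypothetical stores, using only NumPy. The two store samples here are synthetic stand-ins for illustration, not rows of the generated dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples: transaction amounts from two stores (assumed scenario)
store_a = rng.normal(50, 10, size=200)
store_b = rng.normal(53, 10, size=200)

# Observed difference in mean transaction amount
observed = store_b.mean() - store_a.mean()

# Permutation test: pool the amounts, shuffle, and re-split many times
pooled = np.concatenate([store_a, store_b])
n_perm = 2000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[200:].mean() - pooled[:200].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm  # a small p-value suggests the stores genuinely differ
```

In your report you could instead use a library test (e.g. a t-test) if you prefer; the permutation approach is shown because it needs no extra dependencies.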
Submission Instructions
All submissions are to be submitted through the Assignment 1 drop-box that will be set up in the Moodle account for this Unit of Study. Assignments not submitted through this drop-box will not be considered. Submissions must be made by the due date and time (in the session detailed above) as determined by your Unit Coordinator.
Note: All work is due by the due date and time. Late submissions will be penalized at 20% of the assessment final grade per day, including weekends.
Marking Criteria/Rubric
You will be assessed on the following marking criteria/Rubric:
Total Marks: 30
Assessment criteria |
Professional (80%-100%) |
Very Good (70%-79%) |
Good (60%-69%) |
Satisfactory (50%- 59%) |
Unsatisfactory (0%- 49%) |
Analysis Process and Methodology |
The analysis process is meticulously documented, including a thorough explanation of the chosen statistical techniques and their relevance to the dataset. The methodology is clear, logical, and well supported. |
The analysis process is well documented, with clear explanations of the chosen statistical techniques. The methodology is generally logical and supported by relevant reasoning. |
The analysis process is adequately documented, but there may be some gaps in explaining the chosen statistical techniques. The methodology is somewhat clear but may lack depth or coherence in reasoning. |
The analysis process is somewhat documented, with limited explanations of the chosen statistical techniques. The methodology is vague or lacks clarity in reasoning. |
The analysis process is poorly documented, with minimal explanations of the chosen statistical techniques. The methodology is unclear or absent. |
Findings and Insights |
Identifies key findings and insights with exceptional clarity and depth, providing valuable and actionable insights for SmartMart’s decision making process. |
Presents clear and insightful findings, demonstrating a strong understanding of the dataset and its implications for SmartMart’s operations. |
Identifies basic findings and insights, but may lack depth or clarity in analysis, resulting in somewhat limited actionable insights. |
Presents limited findings and insights, with some relevance to SmartMart’s operations, but lacks depth or clear connections to the dataset. |
Fails to identify meaningful findings or insights, with little relevance to SmartMart’s operations. |
Presentation and Clarity |
The report is exceptionally clear, well-organized, and effectively communicates the analysis results and recommendations. Visualizations are highly effective and support the analysis. |
The report is well structured and effectively communicates the analysis results and recommendations. Visualizations are clear and relevant. |
The report is adequately structured and communicates the analysis results and recommendations with some clarity. Visualizations may be somewhat unclear or lacking in relevance. |
The report lacks clear structure and may be difficult to follow. Communication of analysis results and recommendations is somewhat unclear. Visualizations are limited or ineffective. |
The report is poorly structured and difficult to follow. Communication of analysis results and recommendations is unclear or absent. Visualizations are missing or irrelevant. |
Python Code/Scripts |
Python code/scripts are well documented, clear, and demonstrate advanced proficiency in data analysis techniques. Comments thoroughly explain code logic and methodology. |
Python code/scripts are well-structured and demonstrate proficiency in data analysis techniques. Comments provide adequate explanations of code logic and methodology. |
Python code/scripts are adequately structured and demonstrate basic proficiency in data analysis techniques. Comments may lack depth or clarity in explaining code logic and methodology. |
Python code/scripts are somewhat disorganized or lack clarity in structure. Demonstrates limited proficiency in data analysis techniques. Comments may be sparse or unclear. |
Python code/scripts are poorly structured or lack clarity. Demonstrates minimal proficiency in data analysis techniques. Comments are absent or insufficient. |
Recommendations |
Provides detailed and actionable recommendations based on the analysis findings, demonstrating a deep understanding of SmartMart’s business needs and potential strategies for improvement. |
Offers clear and relevant recommendations based on the analysis findings, addressing SmartMart’s business needs and suggesting potential strategies for improvement. |
Provides basic recommendations based on the analysis findings, but may lack depth or specificity in addressing SmartMart’s business needs. |
Offers limited recommendations based on the analysis findings, with minimal relevance to SmartMart’s business needs or strategies for improvement. |
Fails to provide meaningful recommendations based on the analysis findings, with little relevance to SmartMart’s business needs or strategies for improvement. |
Assessment Details for Assessment Item 2: Data Acquisition and Data Mining (Group)
Part A – Report and Part B – Oral Presentation
Overview
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
| 2 | Data Acquisition and Data Mining (Group): Part A – Report (1000 Words); Part B – Presentations | Part A – Session 9; Part B – Session 10 | Part A – 20%; Part B – 10%; Total – 30% | 1, 3, 4 | 1, 2, 3 |
Assignment Overview:
In this assignment, you will work in a group of 3 to 5 students to conduct an Exploratory Data Analysis (EDA) on a comprehensive dataset. The dataset can be acquired from internal or external sources, or by merging both. You will utilize appropriate techniques, tools, and programming languages, such as Python, to perform various data procedures including data acquisition, data wrangling, and data mining to extract meaningful insights from the dataset. The final deliverables will include an EDA report and an oral presentation video to showcase your findings and analysis.
Assignment Tasks:
1. Data Acquisition:
➢ Identify and acquire a comprehensive dataset suitable for the EDA. You can choose from the suggested data sources provided or explore and select different datasets based on your group’s common interest.
➢ Ensure the dataset is relevant, sufficiently large, and contains multiple variables for thorough analysis.
Example Data Sources:
1. Kaggle Datasets (https://www.kaggle.com/datasets)
2. UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/index.php)
3. Government Open Data Portals (e.g., data.gov)
4. Academic Research Databases (e.g., PubMed, IEEE Xplore)
5. Social Media APIs (e.g., Twitter, Facebook)
2. Data Wrangling:
➢ Preprocess the acquired dataset to handle missing values, outliers, and inconsistencies.
➢ Perform data cleaning tasks such as removing duplicates, standardizing formats, and transforming variables if necessary.
➢ Explore methods to handle categorical variables and convert them into a suitable format for analysis.
Note: It is mandatory that data wrangling operations be incorporated into the dataset.
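As a sketch of what these wrangling steps can look like in pandas (the toy `raw` table and its column names are assumptions for illustration, not part of any assignment dataset):

```python
import pandas as pd
import numpy as np

# Hypothetical raw data exhibiting the issues listed above
raw = pd.DataFrame({
    "city": ["Melbourne", "melbourne ", "Sydney", "Sydney", "Sydney"],
    "sales": [100.0, np.nan, 250.0, 250.0, 90.0],
})

# Standardize text formats (trim whitespace, consistent casing)
raw["city"] = raw["city"].str.strip().str.title()

# Remove exact duplicate rows
clean = raw.drop_duplicates().copy()

# Impute missing numeric values with the column median
clean["sales"] = clean["sales"].fillna(clean["sales"].median())

# One-hot encode the categorical column into a format suitable for analysis
encoded = pd.get_dummies(clean, columns=["city"])
```

Your group should choose and justify wrangling strategies appropriate to your chosen dataset; median imputation and one-hot encoding are just two common options.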
3. Data Exploration:
➢ Conduct initial data exploration to understand the structure, distributions, and relationships within the dataset.
➢ Utilize descriptive statistics and visualization techniques (e.g., histograms, box plots, scatter plots) to gain insights into individual variables and their interactions.
➢ Identify any patterns, trends, or anomalies present in the data.
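A minimal exploration sketch, using a synthetic stand-in dataset (the `age`/`income` columns are assumptions for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Hypothetical dataset used only for illustration
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "income": rng.normal(60000, 15000, size=500),
})

# Structure: dimensions of the dataset
shape = df.shape

# Distributions: summary statistics per variable
summary = df.describe()

# Relationships: pairwise correlation matrix
corr = df.corr()

# Simple anomaly check: incomes more than 3 standard deviations from the mean
z = (df["income"] - df["income"].mean()) / df["income"].std()
outliers = df[z.abs() > 3]
```

Histograms, box plots, and scatter plots (e.g. via matplotlib or seaborn) would accompany these numeric summaries in the report.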
4. Data Mining and Analysis:
➢ Apply appropriate data mining techniques such as clustering, classification, or regression to uncover deeper insights within the dataset.
➢ Utilize machine learning algorithms if applicable to predict or classify certain outcomes based on the available variables.
➢ Perform feature engineering if necessary to enhance the predictive power of the model.
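For example, clustering could be sketched with scikit-learn's KMeans (assuming scikit-learn is available; the two synthetic "spending segments" below are illustrative assumptions, not real customer data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical customer features: two well-separated spending segments
low_spenders = rng.normal([20.0, 5.0], 2.0, size=(50, 2))
high_spenders = rng.normal([80.0, 40.0], 2.0, size=(50, 2))
X = np.vstack([low_spenders, high_spenders])

# Cluster customers into two segments
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

On a real dataset you would also justify the number of clusters (e.g. with an elbow plot or silhouette scores) and interpret what each segment means for the business.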
5. EDA Report:
➢ Compile all findings, analysis, and visualizations into a comprehensive EDA report.
➢ Structure the report to include an introduction, methodology, results, discussion, and conclusion sections.
➢ Provide clear explanations for the steps taken, insights gained, and any challenges encountered during the analysis.
➢ Include visualizations and summary statistics to support your findings.
6. Oral Presentation:
➢ Prepare a concise oral presentation to present your EDA findings to the class.
➢ Highlight key insights, trends, and interesting observations discovered during the analysis.
➢ Use visual aids such as slides or interactive dashboards to enhance the presentation.
Submission Guidelines:
➢ The EDA report of 1000 words must be submitted digitally, in either PDF or Word format. The report should include an appendix at the end containing screenshots of the Python code along with its corresponding output.
➢ The oral presentation can be delivered using presentation software (e.g., PowerPoint, Google Slides).
➢ Ensure proper citation and referencing for any external sources or datasets used.
➢ Please submit two files, the Report and the Oral Presentation, through the link provided in the LMS before the specified deadline.
Note: Collaboration within the group is encouraged, but each group member must contribute substantially to the analysis, report writing, and presentation. Plagiarism or unauthorized use of external sources will result in penalties.
Submission Instructions
All submissions are to be submitted through Turnitin. Drop-boxes linked to Turnitin will be set up in the Unit of Study Moodle account. Assignments not submitted through these drop-boxes will not be considered.
Submissions must be made by the due date and time (in the session detailed above) as determined by your Unit Coordinator. Submissions made after the due date and time will be penalized at the rate of 20% per day (including weekend days).
The Turnitin similarity score will be used in determining the level, if any, of plagiarism. Turnitin checks conference websites, journal articles, the web, and your own class members’ submissions for plagiarism. You can see your Turnitin similarity score when you submit your assignment to the appropriate drop-box. If the score is a concern, you will have a chance to revise your assignment and re-submit; however, re-submission is only allowed prior to the submission due date and time. Once the due date and time have elapsed, no re-submissions are possible and the similarity score stands. Thus, plan early and submit early to take advantage of this feature. You can make multiple submissions, but only the last submission is kept, and the submission date and time will be taken from it.
Your submission should be a single Word or PDF document containing your report.
Note: All work is due by the due date and time. Late submissions will be penalized at 20% of the assessment final grade per day, including weekends.
Marking Criteria/Rubric
You will be assessed on the following marking criteria/Rubric:
Total Marks: 30
Assessment criteria |
Professional (80%-100%) |
Very Good (70%-79%) |
Good (60%-69%) |
Satisfactory (50%- 59%) |
Unsatisfactory (0%- 49%) |
Data Acquisition |
Group acquires a highly relevant and comprehensive dataset from a diverse range of sources, ensuring it contains multiple variables for thorough analysis. |
Group acquires a relevant dataset with multiple variables suitable for analysis, demonstrating good selection from suggested or alternative sources. |
Group acquires a dataset, but it may lack depth or relevance in some areas, or may not contain a sufficient number of variables for thorough analysis. |
Group acquires a dataset, but it may lack relevance or contain limited variables for analysis. |
Group fails to acquire an appropriate dataset, lacking relevance, depth, or variables necessary for analysis. |
Data Wrangling |
Comprehensive data wrangling techniques are applied effectively, addressing missing values, outliers, inconsistencies, and categorical variables. Operations are well documented and integrated seamlessly into the dataset. |
Data wrangling operations are performed proficiently, addressing most missing values, outliers, inconsistencies, and categorical variables, with adequate documentation. |
Data wrangling operations are attempted but may lack completeness or documentation, with some issues remaining unresolved. |
Data wrangling efforts are minimal, leaving significant issues unaddressed, with little to no documentation provided. |
Little to no attempt is made to perform data wrangling operations, resulting in unresolved issues and inconsistencies in the dataset. |
Data Exploration |
Extensive data exploration is conducted, utilizing a wide range of descriptive statistics and visualization techniques effectively to gain deep insights into the dataset’s structure, distributions, and relationships. Patterns, trends, and anomalies are identified comprehensively. |
Data exploration is conducted proficiently, utilizing descriptive statistics and visualization techniques to gain insights into the dataset’s structure, distributions, and relationships. Some patterns, trends, and anomalies are identified. |
Basic data exploration is conducted, with limited utilization of descriptive statistics and visualization techniques to understand the dataset’s structure, distributions, and relationships. Some patterns or trends may be overlooked. |
Limited data exploration is conducted, with minimal use of descriptive statistics and visualization techniques, resulting in shallow insights into the dataset’s structure, distributions, and relationships. Important patterns or trends may be missed. |
Little to no data exploration is conducted, resulting in a lack of understanding of the dataset’s structure, distributions, and relationships. Important patterns or trends are not identified. |
Data Mining and Analysis |
Advanced data mining techniques are applied effectively, utilizing appropriate algorithms to uncover deep insights within the dataset. Machine learning algorithms are implemented where applicable, demonstrating advanced analytical skills. Feature engineering, if necessary, is performed proficiently to enhance the predictive power of the model. |
Data mining techniques are applied proficiently, utilizing appropriate algorithms to uncover insights within the dataset. Machine learning algorithms may be applied with moderate success, demonstrating solid analytical skills. Some attempts at feature engineering may be made. |
Basic data mining techniques are applied, but with limited effectiveness in uncovering insights within the dataset. Machine learning algorithms, if applied, may lack sophistication, with minimal attempts at feature engineering. |
Limited data mining techniques are applied, with little effectiveness in uncovering insights within the dataset. Machine learning algorithms, if applied, are rudimentary, with no attempts at feature engineering. |
Little to no attempt is made to apply data mining techniques, resulting in a lack of insights within the dataset. Machine learning algorithms are not utilized, and no attempts at feature engineering are made. |
EDA Report |
A comprehensive EDA report is compiled, containing detailed findings, analysis, and visualizations. The report is well-structured with clear sections, providing thorough explanations for the steps taken, insights gained, and challenges encountered during the analysis. Visualizations and summary statistics effectively support the findings. |
An EDA report is compiled proficiently, containing findings, analysis, and visualizations. The report is adequately structured with clear sections, providing explanations for the steps taken, insights gained, and challenges encountered during the analysis. Visualizations and summary statistics support the findings adequately. |
A basic EDA report is compiled, containing some findings, analysis, and visualizations. The report may lack cohesion or depth in some areas, with limited explanations provided for the steps taken, insights gained, and challenges encountered during the analysis. Visualizations and summary statistics may be insufficient. |
A rudimentary EDA report is compiled, containing limited findings, analysis, and visualizations. The report lacks structure and depth, with minimal explanations provided for the steps taken, insights gained, and challenges encountered during the analysis. Visualizations and summary statistics are lacking or ineffective. |
Little to no attempt is made to compile an EDA report, resulting in a lack of findings, analysis, and visualizations. The report is incomplete or missing key sections, with no explanations provided for the steps taken, insights gained, or challenges encountered during the analysis. |
Oral Presentation |
A concise oral presentation is prepared, effectively presenting EDA findings to the audience. Key insights, trends, and observations are highlighted clearly, supported by visual aids such as slides or interactive dashboards. Presentation delivery is engaging and demonstrates strong communication skills. |
An oral presentation is prepared proficiently, presenting EDA findings clearly to the audience. Key insights, trends, and observations are highlighted adequately, supported by visual aids such as slides or interactive dashboards. Presentation delivery is engaging and demonstrates good communication skills. |
A basic oral presentation is prepared, presenting EDA findings with some clarity to the audience. Key insights, trends, and observations may be overlooked or presented less effectively, with visual aids such as slides or interactive dashboards used minimally. Presentation delivery may lack engagement or coherence. |
A rudimentary oral presentation is prepared, lacking clarity in presenting EDA findings to the audience. Key insights, trends, and observations are poorly highlighted, with minimal use of visual aids such as slides or interactive dashboards. Presentation delivery lacks engagement and coherence. |
Little to no attempt is made to prepare an oral presentation, resulting in a lack of clarity in presenting EDA findings to the audience. Key insights, trends, and observations are not highlighted effectively, with no visual aids used. Presentation delivery lacks engagement and coherence. |
Assessment Details for Assessment Item 3: Data Modelling Project (Group) Part A – Report (1500 Words) and Part B – Presentations
Overview
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
| 3 * | Data Modelling Project (Group): Part A – Report (1500 Words); Part B – Presentations | Part A – Session 13 (Study Week); Part B – Session 14 (Exam Week) | Part A – 30%; Part B – 10%; Total – 40% | 4, 5 | 1, 2, 3, 4, 5 |
Assignment Overview:
In this assignment, you will work in a group of 3 to 5 students. You will collaborate with your team members to produce a comprehensive final report summarizing your work on a credit analysis dataset: the process of building data model(s) to fit the dataset and conducting data analysis. You will also address how the results are validated and interpreted, and provide insights and recommendations derived from your analysis. Additionally, ethical and social issues related to the project must be thoroughly addressed. You will utilize appropriate tools and languages, such as Python and Tableau, to complete this task. Your group will be required to submit a report and deliver an oral presentation.
Creating Dataset:
Use the program below to generate a credit analysis dataset with information for 5000 customers.
import pandas as pd
import numpy as np
import random
# Set seed for reproducibility
random.seed(42)
# Generate sample data
num_samples = 5000
# Sample customer IDs
customer_ids = ['C' + str(i).zfill(4) for i in range(1, num_samples + 1)]
# Sample credit scores (ranging from 300 to 850)
credit_scores = [random.randint(300, 850) for _ in range(num_samples)]
# Sample ages (ranging from 18 to 80)
ages = [random.randint(18, 80) for _ in range(num_samples)]
# Sample income (ranging from 20000 to 200000)
income = [random.randint(20000, 200000) for _ in range(num_samples)]
# Sample loan amounts (ranging from 1000 to 100000)
loan_amounts = [random.randint(1000, 100000) for _ in range(num_samples)]
# Introduce missing values for loan amounts (5% missing values)
missing_indices = random.sample(range(num_samples), int(0.05 * num_samples))
for index in missing_indices:
    loan_amounts[index] = np.nan
# Sample loan durations (ranging from 1 to 60 months)
loan_durations = [random.randint(1, 60) for _ in range(num_samples)]
# Introduce outliers for loan durations (2% outliers)
outlier_indices = random.sample(range(num_samples), int(0.02 * num_samples))
for index in outlier_indices:
    loan_durations[index] = random.randint(120, 240)  # Outliers ranging from 10 to 20 years
# Sample loan types
loan_types = ['Personal Loan', 'Car Loan', 'Home Loan', 'Education Loan']
loan_purposes = [random.choice(loan_types) for _ in range(num_samples)]
# Sample employment status
employment_status = ['Employed', 'Unemployed', 'Self-Employed']
employment = [random.choice(employment_status) for _ in range(num_samples)]
# Sample default status
default_status = [random.choice([True, False]) for _ in range(num_samples)]
# Create DataFrame
data = pd.DataFrame({
    'CustomerID': customer_ids,
    'CreditScore': credit_scores,
    'Age': ages,
    'Income': income,
    'LoanAmount': loan_amounts,
    'LoanDurationMonths': loan_durations,
    'LoanPurpose': loan_purposes,
    'EmploymentStatus': employment,
    'DefaultStatus': default_status
})
# Display first few rows of the dataset
print(data.head())
# Save DataFrame to a CSV file
data.to_csv('credit_analysis_dataset_with_missing_outliers.csv', index=False)
Columns(information) in Dataset:
➢ CustomerID: This column represents a unique identifier for each customer. It’s typically used to track individual customers within the dataset.
➢ CreditScore: This column represents the credit score of each customer. Credit scores are numerical representations of an individual’s creditworthiness, often used by lenders to assess the risk of lending money to a borrower. Higher credit scores indicate lower credit risk.
➢ Age: This column represents the age of each customer. Age can be an important factor in credit analysis as it may correlate with financial stability and responsibility.
➢ Income: This column represents the income of each customer. Income is a key factor in determining creditworthiness, as it affects an individual’s ability to repay loans.
➢ LoanAmount: This column represents the amount of the loan that each customer has applied for or obtained. It indicates the sum of money borrowed from a lender.
➢ LoanDurationMonths: This column represents the duration of the loan in months. It indicates the length of time over which the loan is expected to be repaid.
➢ LoanPurpose: This column represents the purpose for which the loan is taken. It could include categories such as personal loans, car loans, home loans, or education loans.
➢ EmploymentStatus: This column represents the employment status of each customer. It indicates whether the customer is employed, unemployed, or self-employed. Employment status is important in assessing a borrower’s ability to repay a loan.
➢ DefaultStatus: This column represents whether the customer has defaulted on a loan. It’s a binary column where “True” indicates that the customer has defaulted, and “False” indicates that the customer has not defaulted. Default status is a critical factor in credit analysis as it reflects the risk associated with lending to a particular customer.
Task:
1. Data Understanding:
a. Describe the key features of the credit analysis dataset generated using the provided Python code.
b. What are the dimensions of the dataset? How many records does it contain?
c. Discuss the significance of each column in the dataset and how it contributes to the credit analysis process.
d. Are there any missing values or outliers in the dataset? If so, how do you plan to handle them before proceeding with data modeling and analysis?
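One common way to approach Task 1d is median imputation for missing values and the 1.5 × IQR (Tukey) rule for outliers; other choices (mean imputation, z-scores, winsorising) are equally valid. A minimal sketch on an illustrative series standing in for a numeric column such as Income:

```python
import pandas as pd
import numpy as np

# Illustrative values; the real column comes from the generated CSV.
income = pd.Series([52000, 61000, np.nan, 38000, 450000, 47000], dtype=float)

# Missing values: impute with the median, which is robust to skew.
income_filled = income.fillna(income.median())

# Outliers: flag values outside the 1.5 * IQR fences (Tukey's rule).
q1, q3 = income_filled.quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = income_filled[(income_filled < lower) | (income_filled > upper)]

print(outliers.tolist())
```

Whatever strategy is chosen, the report should justify it (e.g. why median rather than mean, and whether outliers are dropped, capped, or kept).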
2. Data Modeling and Analysis:
a. Explain the process of building data model(s) to fit the credit analysis dataset. Which techniques or algorithms did you employ for modeling?
b. What metrics or criteria did you use to evaluate the performance of your data model(s)?
c. Provide insights into the patterns or trends observed during data analysis. How do these insights contribute to understanding customer behavior and credit risk?
d. Discuss any challenges or limitations encountered during the modeling and analysis phase and how you addressed them.
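For Task 2b, the evaluation metrics themselves are simple to compute once a classifier produces predictions. The sketch below uses hypothetical labels and predictions (any model, e.g. logistic regression, could supply them) to derive accuracy, precision, and recall from the confusion-matrix counts:

```python
import numpy as np

# Hypothetical true DefaultStatus labels and model predictions (1 = default).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0])

tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # correctly flagged defaulters
fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false alarms
fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # missed defaulters
tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # correct non-defaults

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of those flagged, how many truly default
recall = tp / (tp + fn)      # of true defaulters, how many were caught

print(accuracy, precision, recall)
```

In credit risk, recall on the default class often matters more than raw accuracy, since missed defaulters (false negatives) are usually the costlier error; the report should say which metric drove model selection and why.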
Victorian Institute of Technology CRICOS Provider No. 02044E, RTO No: 20829
3. Validation and Interpretation:
a. Describe the methods used to validate the results obtained from data modeling and analysis.
b. How do you interpret the outcomes of your analysis in the context of credit risk assessment?
c. Discuss the reliability and robustness of the insights derived from the analysis.
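A standard validation method for Task 3a is k-fold cross-validation. Libraries such as scikit-learn provide this ready-made, but the splitting logic is simple enough to sketch directly, which also makes it easy to explain in the report:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)   # shuffle row indices once
    folds = np.array_split(order, k)     # k near-equal, disjoint folds
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# Each of the 5 splits trains on 8 rows and tests on the held-out 2.
splits = list(kfold_indices(n_samples=10, k=5))
print(len(splits))
```

Reporting the mean and spread of a metric across folds gives a more honest picture of robustness (Task 3c) than a single train/test split.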
4. Insights and Recommendations:
a. Based on your analysis, what insights can be drawn regarding customer creditworthiness and risk management?
b. Provide recommendations for improving the credit assessment process or mitigating credit risk based on your findings.
c. How do these insights and recommendations align with the objectives of the credit analysis project?
5. Ethical and Social Considerations:
a. Identify and discuss any ethical or social issues related to the collection, usage, and analysis of the credit analysis dataset.
b. How did your team address these ethical and social considerations throughout the project?
c. What measures were implemented to ensure fairness, transparency, and accountability in the analysis and decision-making process?
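For Task 5c, one concrete fairness measure is to compare model outcomes across groups defined by a sensitive attribute. The sketch below is purely illustrative (the age bands and predictions are hypothetical; the real attribute and outputs would come from the group's own analysis) and computes the rate at which each group is flagged as a likely defaulter, a simple demographic-parity style check:

```python
import pandas as pd

# Hypothetical model outputs joined with an age-band attribute.
df = pd.DataFrame({
    "AgeBand": ["<30", "<30", "30-50", "30-50", "50+", "50+"],
    "PredictedDefault": [True, False, False, False, True, True],
})

# Fraction of each group flagged as likely to default.
rates = df.groupby("AgeBand")["PredictedDefault"].mean()
gap = float(rates.max() - rates.min())
print(rates.to_dict(), gap)
```

A large gap between groups does not prove unfairness by itself, but it is the kind of evidence the report can use to show the team checked for disparate treatment.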
6. Oral Presentation:
➢ Prepare a concise oral presentation to present your findings to the class.
➢ Highlight key insights, trends, and interesting observations discovered during the analysis.
➢ Use visual aids such as slides or interactive dashboards to enhance the presentation.
Submission Guidelines:
➢ The analysis report of 1500 words must be submitted digitally, in either PDF or Word document format. The report should include an appendix at the end containing screenshots of the Python code along with its corresponding output.
➢ The oral presentation can be delivered using presentation software (e.g., PowerPoint, Google Slides).
➢ Ensure proper citation and referencing for any external sources or datasets used.
➢ Please submit two files, the Report and the Oral Presentation, through the link provided in the LMS before the specified deadline.
Note: Collaboration within the group is encouraged, but each group member must contribute substantially to the analysis, report writing, and presentation. Plagiarism or unauthorized use of external sources will result in penalties.
Marking Criteria/Rubric
You will be assessed on the following marking criteria/Rubric:
Total Marks: 40
| Assessment criteria | Professional (80%-100%) | Very Good (70%-79%) | Good (60%-69%) | Satisfactory (50%-59%) | Unsatisfactory (0%-49%) |
| --- | --- | --- | --- | --- | --- |
| Data Understanding | Comprehensive description of dataset features, dimensions, significance of each column, and clear plan to handle missing values and outliers. | Good description of dataset features, dimensions, significance of each column, with some plan to handle missing values and outliers. | Adequate description of dataset features, dimensions, significance of each column, with limited plan to handle missing values and outliers. | Basic description of dataset features and dimensions, lacking in-depth discussion on significance of each column and plan for handling missing values and outliers. | Inadequate description of dataset features and dimensions, with no clear plan for handling missing values and outliers. |
| Data Modeling and Analysis | Detailed explanation of the process of building data model(s), techniques/algorithms employed, metrics/criteria for model evaluation, insights into patterns/trends, and discussion of challenges/limitations. | Explanation of the process of building data model(s), techniques/algorithms employed, metrics/criteria for model evaluation, insights into patterns/trends, and some discussion of challenges/limitations. | Explanation of the process of building data model(s), techniques/algorithms employed, metrics/criteria for model evaluation, and basic insights into patterns/trends observed. | Basic explanation of the process of building data model(s), techniques/algorithms employed, and limited discussion on metrics/criteria for model evaluation and insights into patterns/trends. | Inadequate explanation of the process of building data model(s), techniques/algorithms employed, and no discussion of metrics/criteria for model evaluation and insights into patterns/trends. |
| Validation and Interpretation | Clear description of validation methods used, interpretation of analysis outcomes in the context of credit risk assessment, and discussion of reliability/robustness of insights. | Description of validation methods used, interpretation of analysis outcomes in the context of credit risk assessment, and discussion of reliability/robustness of insights. | Description of validation methods used and interpretation of analysis outcomes in the context of credit risk assessment. | Basic description of validation methods used and limited interpretation of analysis outcomes in the context of credit risk assessment. | Inadequate description of validation methods used and no interpretation of analysis outcomes in the context of credit risk assessment. |
| Insights and Recommendations | Comprehensive insights drawn regarding customer creditworthiness and risk management, detailed recommendations for improving the credit assessment process or mitigating credit risk, and alignment of insights/recommendations with project objectives. | Insights drawn regarding customer creditworthiness and risk management, recommendations for improving the credit assessment process or mitigating credit risk, and alignment of insights/recommendations with project objectives. | Basic insights drawn regarding customer creditworthiness and risk management, recommendations for improving the credit assessment process or mitigating credit risk, and some alignment with project objectives. | Limited insights drawn regarding customer creditworthiness and risk management, recommendations for improving the credit assessment process or mitigating credit risk, and limited alignment with project objectives. | Inadequate insights drawn regarding customer creditworthiness and risk management, recommendations for improving the credit assessment process or mitigating credit risk, and no alignment with project objectives. |
| Ethical and Social Considerations | Identification and discussion of ethical or social issues related to data collection, usage, and analysis, how the team addressed these considerations, and measures implemented for fairness, transparency, and accountability. | Identification and discussion of ethical or social issues related to data collection, usage, and analysis, some discussion on how the team addressed these considerations, and some measures implemented for fairness, transparency, and accountability. | Identification and discussion of ethical or social issues related to data collection, usage, and analysis, and limited discussion on how the team addressed these considerations and measures implemented for fairness, transparency, and accountability. | Basic identification and discussion of ethical or social issues related to data collection, usage, and analysis, and limited discussion on how the team addressed these considerations and measures implemented for fairness, transparency, and accountability. | Inadequate identification and discussion of ethical or social issues related to data collection, usage, and analysis, and no discussion on how the team addressed these considerations and measures implemented for fairness, transparency, and accountability. |
| Oral Presentation | Concise oral presentation with clear highlighting of key insights, trends, and observations discovered during analysis, and effective use of visual aids to enhance the presentation. | Oral presentation with highlighting of key insights, trends, and observations discovered during analysis, and use of visual aids to enhance the presentation. | Oral presentation with some highlighting of key insights, trends, and observations discovered during analysis, and limited use of visual aids. | Basic oral presentation with limited highlighting of key insights, trends, and observations discovered during analysis, and minimal use of visual aids. | Inadequate oral presentation with no highlighting of key insights, trends, and observations discovered during analysis, and no use of visual aids. |