

Monday, August 22, 2016

Most Important Data Analyst Interview Questions with Answers.

Here are the top 15 most frequently asked data analyst interview questions, with answers.


1) What are the responsibilities of a data analyst?

The responsibilities of a data analyst include:

Provide support for all data analysis and coordinate with customers and staff
Resolve business-related issues for clients and perform audits on data
Analyze results, interpret data using statistical techniques, and provide ongoing reports
Prioritize business and information needs and work closely with management
Identify new processes or areas for improvement
Analyze, identify, and interpret trends or patterns in complex data sets
Acquire data from primary or secondary data sources and maintain databases/data systems
Filter and “clean” data, and review computer reports
Determine performance indicators to locate and correct code problems
Secure the database by developing an access system that defines each user’s level of access

2) What is required to become a data analyst?

To become a data analyst, you need:

Robust knowledge of reporting packages (e.g., Business Objects), programming languages or frameworks (e.g., XML, JavaScript, or ETL frameworks), and databases (SQL, SQLite, etc.)
Strong ability to collect, organize, analyze, and disseminate large volumes of data with accuracy
Technical knowledge of database design, data models, data mining, and segmentation techniques
Strong knowledge of statistical packages for analyzing large datasets (SAS, Excel, SPSS, etc.)

3) What are the various steps in an analytics project?

The various steps in an analytics project include:

Problem definition
Data exploration
Data preparation
Modelling
Validation of data
Implementation and tracking

4) What is data cleansing?

Data cleaning, also referred to as data cleansing, deals with identifying and removing errors and inconsistencies from data in order to enhance its quality.
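A minimal sketch of the idea, assuming pandas as the tool (the answer above does not prescribe one): standardize inconsistent text, drop duplicates, and remove records with missing key fields.

```python
# Minimal data-cleansing sketch; pandas and the column names are assumptions.
import pandas as pd

df = pd.DataFrame({
    "city":  ["New York", "new york ", "Boston", "Boston", None],
    "sales": [100, 100, 250, 250, 300],
})

# Fix inconsistent text values: trim whitespace and normalise case.
df["city"] = df["city"].str.strip().str.title()

# Remove exact duplicate rows and drop rows missing a key field.
df = df.drop_duplicates().dropna(subset=["city"])
print(df)
```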

5) List some of the best practices for data cleaning.

Some of the best practices for data cleaning include:

Sort data by different attributes
Cleanse large datasets stepwise, improving the data with each step until you achieve good data quality
Break large datasets into smaller chunks; working with less data will increase your iteration speed
Create a set of utility functions/tools/scripts to handle common cleansing tasks, such as remapping values based on a CSV file or SQL database, regex search-and-replace, or blanking out all values that don’t match a regex (see the sketch after this list)
If you have issues with data cleanliness, arrange them by estimated frequency and attack the most common problems first
Analyze the summary statistics for each column (standard deviation, mean, number of missing values, etc.)
Keep track of every data cleaning operation, so you can alter or remove operations if required
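As a rough illustration of the utility-script and summary-statistics points above, here is a sketch assuming pandas; the column names, mappings, and regex are illustrative assumptions, not part of the original answer.

```python
# Sketch of cleansing utilities: value remapping, regex-based blanking,
# and per-column summary statistics. pandas and all names are assumptions.
import pandas as pd

def remap_values(series: pd.Series, mapping: dict) -> pd.Series:
    """Remap raw values to canonical ones; unmapped values pass through."""
    return series.replace(mapping)

def blank_non_matching(series: pd.Series, pattern: str) -> pd.Series:
    """Blank out values that do not fully match a regex."""
    return series.where(series.astype(str).str.fullmatch(pattern))

df = pd.DataFrame({"state": ["NY", "N.Y.", "CA", "Calif."],
                   "zip":   ["10001", "1000", "94105", "xxxx"]})

df["state"] = remap_values(df["state"], {"N.Y.": "NY", "Calif.": "CA"})
df["zip"] = blank_non_matching(df["zip"], r"\d{5}")

# Summary statistics and missing-value counts per column.
print(df.describe(include="all"))
print(df.isna().sum())
```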

6) Explain what logistic regression is.

Logistic regression is a statistical method for examining a dataset in which one or more independent variables determine a (typically binary) outcome.
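A small sketch of fitting a logistic regression, assuming scikit-learn and one of its bundled binary-outcome datasets (both are assumptions; the answer only describes the method).

```python
# Logistic regression sketch; scikit-learn and the dataset are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)           # binary outcome (0/1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)             # independent variables -> outcome
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```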

7) List some of the best tools that can be useful for data analysis.

Tableau
RapidMiner
OpenRefine
KNIME
Google Search Operators
Solver
NodeXL
io
Wolfram Alpha
Google Fusion Tables

8) What is the difference between data mining and data profiling?

The difference between data mining and data profiling is as follows:

Data profiling: It focuses on the instance analysis of individual attributes. It gives information on various attributes such as value range, discrete values and their frequency, occurrence of null values, data type, length, etc. (see the sketch below).

Data mining: It focuses on cluster analysis, detection of unusual records, dependencies, sequence discovery, relations holding between several attributes, etc.
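A rough data-profiling sketch, assuming pandas (the answer names no tool): for each attribute, report its type, null count, number of distinct values, and value range.

```python
# Data-profiling sketch; pandas and the sample columns are assumptions.
import pandas as pd

df = pd.DataFrame({"age":  [25, 31, None, 42],
                   "plan": ["basic", "pro", "pro", "basic"]})

profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "nulls":    df.isna().sum(),
    "distinct": df.nunique(),
    "min":      df.min(),
    "max":      df.max(),
})
print(profile)
```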

9) List some common problems faced by data analysts.

Some of the common problems faced by data analysts are (a few quick checks are sketched after this list):

Common misspellings
Duplicate entries
Missing values
Illegal values
Varying value representations
Identifying overlapping data
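Quick checks for a few of these problems (duplicate entries, missing values, illegal values), sketched with pandas; the columns and validity rules are illustrative assumptions.

```python
# Checks for duplicate entries, missing values, and illegal values.
# pandas, the columns, and the validity rules are assumptions.
import pandas as pd

df = pd.DataFrame({"email": ["a@x.com", "a@x.com", None, "bad-email"],
                   "age":   [29, 29, 150, 41]})

print("duplicate rows :", df.duplicated().sum())
print("missing values :", df.isna().sum().sum())
print("illegal ages   :", (~df["age"].between(0, 120)).sum())
print("bad emails     :", (~df["email"].str.contains("@", na=False)).sum())
```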

10) Name the programming framework developed by Google for processing large data sets in a distributed computing environment.

MapReduce is the programming framework developed by Google for processing large data sets in a distributed computing environment. (Hive, by contrast, is a data warehouse system built on top of Hadoop and was originally developed at Facebook.)
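A toy illustration of the map, shuffle/group, and reduce phases in plain Python; this word-count example is only a sketch of the programming model, not how any particular framework is invoked.

```python
# Toy word count illustrating the MapReduce model: map, shuffle/group, reduce.
from collections import defaultdict

documents = ["big data big insight", "big picture"]

# Map: emit (key, value) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce: aggregate the values for each key.
counts = {key: sum(values) for key, values in grouped.items()}
print(counts)  # {'big': 3, 'data': 1, 'insight': 1, 'picture': 1}
```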

11) What missing-data patterns are generally observed?

The missing-data patterns that are generally observed are:

Missing completely at random
Missing at random
Missing that depends on the missing value itself
Missing that depends on an unobserved input variable

12) Explain the KNN imputation method.

In KNN imputation, missing attribute values are imputed using the values from the records most similar to the record whose values are missing. The similarity of two records is determined using a distance function.
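A short sketch assuming scikit-learn's KNNImputer (the answer itself names no library): the missing entry is filled from the k most similar rows under a distance function.

```python
# KNN imputation sketch; scikit-learn's KNNImputer is an assumed tool choice.
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [8.0, 8.0]])

imputer = KNNImputer(n_neighbors=2)        # distance-based neighbours
print(imputer.fit_transform(X))            # NaN becomes the mean of its 2 nearest rows
```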

13) What data validation methods are used by data analysts?

Usually, the methods used by data analysts for data validation are (a simple screening check is sketched after this list):

Data screening
Data verification
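A minimal data-screening sketch, assuming pandas and an illustrative range rule: flag records whose values fall outside expected bounds so they can be examined during verification.

```python
# Data-screening sketch; pandas, the column, and the 0-120 range rule
# are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "age": [34, -5, 210]})

suspect = df[~df["age"].between(0, 120)]
print(suspect)  # rows flagged for review during data verification
```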

14) Explain what should be done with suspected or missing data.

Prepare a validation report that gives information about all suspected data, such as the validation criteria it failed and the date and time of occurrence
Experienced personnel should examine the suspicious data to determine its acceptability
Invalid data should be flagged and replaced with a validation code
To handle missing data, use the best-suited analysis strategy, such as deletion methods, single imputation methods, model-based methods, etc. (see the sketch after this list)
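A sketch of two of those strategies, deletion and single imputation, assuming pandas; the data and the fill rules (mean for numeric, mode for categorical) are illustrative.

```python
# Deletion vs. single imputation for missing data; pandas and the fill
# rules are assumptions.
import pandas as pd

df = pd.DataFrame({"income": [52000, None, 61000, 58000],
                   "region": ["N", "S", "S", None]})

# Listwise deletion: drop any record with a missing value.
deleted = df.dropna()

# Single imputation: fill numeric gaps with the mean, categorical with the mode.
imputed = df.copy()
imputed["income"] = imputed["income"].fillna(imputed["income"].mean())
imputed["region"] = imputed["region"].fillna(imputed["region"].mode()[0])

print(deleted, imputed, sep="\n\n")
```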

15) How do you deal with multi-source problems?

To deal with multi-source problems:

Restructure schemas to accomplish schema integration
Identify similar records and merge them into a single record containing all relevant attributes without redundancy (see the sketch below)
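A small sketch of merging records from two sources into one consolidated record per entity, assuming pandas and a shared key; the tables and key name are illustrative.

```python
# Merge two sources on a shared key and keep one row per entity.
# pandas, the tables, and the key name are assumptions.
import pandas as pd

crm   = pd.DataFrame({"customer_id": [1, 2], "name": ["Ann Lee", "Bo Chan"]})
sales = pd.DataFrame({"customer_id": [1, 2], "total_spend": [420.0, 130.0]})

# Schema integration: align on the shared key, then combine attributes.
merged = crm.merge(sales, on="customer_id", how="outer")

# If the same entity appears more than once, keep a single consolidated row.
merged = merged.drop_duplicates(subset="customer_id")
print(merged)
```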
