Tuesday, 28 November 2023

Deadlock Recovery

Deadlock recovery techniques like wound-wait and wait-die are used to handle situations where processes are waiting for resources held by other processes and a deadlock might occur. Let's break them down:


Wound-Wait:


Think of it like being a bit impatient. If a process needs a resource held by another process, it compares its age (start time) with the other process's.

If the other process is "younger" (started later), the current process can "wound" it, meaning it forces the younger process to release its resources and restart, preventing a deadlock.

If the other process is "older" (started earlier), the current process waits.

Wait-Die:


This approach is more patient. If a process needs a resource held by another process, it compares their ages.

If the other process is "younger," the current process waits.

If the other process is "older," the current process "dies," meaning it gives up and gets restarted (keeping its original age, so it will eventually succeed), allowing the older process to continue and preventing a deadlock.

In both cases, the idea is to use process age to decide who waits and who restarts, so that a circular wait can never form.
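A minimal sketch of both decisions in Python, assuming each process carries a timestamp where a smaller timestamp means an older process (the function names and return strings are just for illustration):

def wound_wait(requester_ts, holder_ts):
    # Preemptive scheme: an older requester wounds a younger holder.
    if requester_ts < holder_ts:   # requester is older
        return "wound holder"      # holder must release its resources and restart
    return "wait"                  # a younger requester simply waits

def wait_die(requester_ts, holder_ts):
    # Non-preemptive scheme: an older requester waits, a younger one dies.
    if requester_ts < holder_ts:   # requester is older
        return "wait"
    return "die"                   # requester restarts, keeping its timestamp

print(wound_wait(1, 5))  # older requester -> wound holder
print(wait_die(5, 1))    # younger requester -> die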




Sunday, 26 November 2023

Factor analysis, Dimensionality reduction, Predictive analytics, Cluster Analysis, Decision Tree, Types of Decision Trees notes

Factor analysis: in factor analysis, observed variables are grouped into a smaller number of underlying (latent) factors, i.e., they are reduced according to their functionality.

Example: waiting time, cleanliness, healthiness and taste are observed variables in a hotel or restaurant.

Waiting time and cleanliness load onto a "service" factor.

Healthiness and taste load onto a "food experience" factor.

In a hotel or restaurant, these variables are thus reduced to two factors.

Descriptive analysis like this is very important for model building.

Merits of factor analysis (a quick sketch follows the list):
1. reduces the amount of data
2. easy categorization
3. uses statistical methods
4. very important in model building
5. data becomes easier to interpret because fewer variables remain
6. it is an important descriptive analysis technique
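A minimal sketch with scikit-learn, assuming a made-up ratings matrix for the restaurant example (columns: waiting time, cleanliness, healthiness, taste); the two extracted factors loosely correspond to service and food experience:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.random((100, 4))        # 100 customers x 4 observed variables

fa = FactorAnalysis(n_components=2)   # reduce 4 variables to 2 factors
factors = fa.fit_transform(ratings)

print(factors.shape)                  # (100, 2)
print(fa.components_)                 # loading of each variable on each factor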

Predictive analytics: is when we predict unknown or future values with the help of past/current data.
There are 4 types of predictive analytics:
    1. classification
    2. prediction
    3. regression
    4. time series
These types are used in different situations accordingly.

Dimensionality reduction: when data has high dimensionality, working with it becomes a difficult and tedious task.

To address this, we perform the data reduction step of the KDD process.

Dimensionality reduction is a part of data reduction.

Some data reduction techniques (a normalization sketch follows the list):
1. Data cube: data is aggregated in cube form so that fewer dimensions remain
2. Numerosity reduction: we replace the data with a smaller representation, e.g. samples or histograms of groups of data
3. Feature selection: we find the most relevant features, using filter and wrapper methods etc.
4. Normalization: to bring all the data into the same range or interval
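A minimal sketch of the normalization step, assuming min-max scaling to bring every column into the [0, 1] range:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 600.0]])

scaled = MinMaxScaler().fit_transform(data)
print(scaled)   # every column now lies between 0 and 1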

Lazy Learning: lazy learning is a technique where we give a model its rules and training data, but it defers the actual work until a query arrives; the answer is then computed from the query's neighbours.

If we give it data, say a=2 and b=3, only then does it evaluate the formula; otherwise it won't.

It is useful when we have terabytes of data and cannot afford to process all of it up front.

kNN is a lazy learning algorithm, as the sketch below shows.
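A minimal kNN sketch with scikit-learn on made-up points: fit() only stores the training data, and the real work happens at prediction time, when the neighbours are looked up:

from sklearn.neighbors import KNeighborsClassifier

X_train = [[1, 1], [2, 2], [8, 8], [9, 9]]
y_train = [0, 0, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)         # cheap: essentially just stores the data

print(knn.predict([[1.5, 1.5]]))  # the work happens now -> [0]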

Cluster Analysis
Cluster analysis is a kind of descriptive analysis where we group, or cluster, similar kinds of data.
It is helpful for large amounts of unstructured data.
It is used for further analysis and for building models.

There are different types of cluster analysis, but the most popular ones are
1. partitioning clusters
2. hierarchical 
3. Density Based
4. Grid Based
1. Partitioning Methods: as the name suggests, partitioning methods involve partitioning a data set into groups and iteratively applying some method or formula to derive the clusters.
K-means and K-medoids are partitioning methods; a sketch follows.
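A minimal k-means sketch on made-up 2-D points:

from sklearn.cluster import KMeans

points = [[1, 1], [1.5, 2], [8, 8], [8.5, 9], [0.5, 1.5], [9, 8.5]]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)           # which partition each point was assigned to
print(km.cluster_centers_)  # the centre of each partition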

2. Hierarchical Methods: as the name suggests, we build a hierarchy of clusters, merging or splitting them step by step until the final clusters form.
Agglomerative (bottom-up) and Divisive (top-down) are popular hierarchical methods.

3. Density Methods: we group data points by the density of their neighbourhoods, growing clusters through dense regions. DBSCAN is one of the popular density-based clustering techniques; a sketch follows.
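A minimal DBSCAN sketch; eps and min_samples are illustrative values, not tuned ones:

from sklearn.cluster import DBSCAN

points = [[1, 1], [1.1, 1.2], [1.2, 0.9], [8, 8], [8.1, 8.2], [50, 50]]

db = DBSCAN(eps=0.5, min_samples=2).fit(points)
print(db.labels_)   # two dense clusters; the isolated point is labelled -1 (noise)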

4. Grid Based: in this method, we divide the data space into a rectangular grid of cells and assign a density to each cell.
The cells are then organized into hierarchical levels.
Higher-level cells summarize groups of lower-level cells, and clusters are formed from dense cells.

In cluster analysis, we need to choose the appropriate clustering technique for the dataset.

Decision Tree: consider the following example:
weather is the root node
humidity and speed are attributes/columns
yes or no are the class labels
The example decides whether a child can play outside or not.

In a decision tree, a rectangle represents an attribute/column and an ellipse represents a class label.

The main purpose of a decision tree is to extract rules for classification; a sketch of extracting such rules automatically follows.

Example: if weather = sunny and humidity = normal then play = yes

if weather = cloudy then play = yes

if weather = windy and speed = low then play = yes
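A minimal sketch with scikit-learn, assuming a hand-encoded toy version of the play-outside data (the rows and encodings are made up for illustration):

from sklearn.tree import DecisionTreeClassifier, export_text

# weather: 0=sunny, 1=cloudy, 2=windy; humidity: 0=normal, 1=high; speed: 0=low, 1=high
X = [[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 1], [2, 0, 0], [2, 1, 1]]
y = ["yes", "no", "yes", "yes", "yes", "no"]

tree = DecisionTreeClassifier().fit(X, y)

# export_text prints the learned if/then structure as readable rules
print(export_text(tree, feature_names=["weather", "humidity", "speed"]))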


Types of Decision Trees
1. unweighted decision tree: when there is no weight on any node of the decision tree, i.e., there are no biases in the tree

2. weighted decision tree: when weights are assigned to the nodes of the tree

3. binary decision tree: where each node splits into at most two branches

4. Random forest: n number of decision trees combined by voting; a sketch follows
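A minimal random forest sketch, reusing the illustrative play-outside encoding from above:

from sklearn.ensemble import RandomForestClassifier

X = [[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 1], [2, 0, 0], [2, 1, 1]]
y = ["yes", "no", "yes", "yes", "yes", "no"]

forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(forest.predict([[1, 0, 0]]))  # 10 trees vote on the answer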

Functions and Files in Python

A function is a reusable block of code that runs when called.

Syntax:
def function_name(arguments):
    statements
    return

def - declares the function
function_name - the name of the function
arguments - values passed to the function
statements - the block of code the function runs
return - used to return a value; it is sometimes used and sometimes not

Note: once return is executed, the function ends.

Function call: we can call a function by using its name followed by parentheses. Once the function is called, execution jumps to the body of the function's definition.

Syntax: function_name()

def greet():
    statements

# calling the function
greet()

Example:
def add_numbers(a=8, b=6):
    total = a + b    # a and b default to 8 and 6 when not supplied
    print(total)

# call the function
add_numbers(2, 3)

Output:
5
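Since the parameters have default values, the function can also be called with no arguments; a quick sketch:

add_numbers()      # prints 14 (falls back to the defaults 8 and 6)
add_numbers(2, 3)  # prints 5, as above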

Files in Python
There are different operations on a file (a sketch follows the list):
1. creating a file:        fp = open('file.txt', 'w')  or  fp = open('file.txt', 'a')
2. opening a file:         fp = open('file.txt', 'w')  or  fp = open('file.txt', 'a')  or  fp = open('file.txt', 'r')
3. reading from a file:    fp = open('file.txt', 'r')
4. writing to a file:      fp = open('file.txt', 'w')
5. closing the file:       fp.close()
6. appending to a file:    fp = open('file.txt', 'a')
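A minimal end-to-end sketch of these operations, using a throwaway file name:

fp = open('file.txt', 'w')   # create/open for writing (truncates the file)
fp.write('hello\n')
fp.close()                   # always close the file when done

fp = open('file.txt', 'a')   # append mode adds to the end
fp.write('world\n')
fp.close()

fp = open('file.txt', 'r')   # read mode
print(fp.read())             # hello / world
fp.close()

In idiomatic Python, a with-statement (with open('file.txt') as fp:) closes the file automatically.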


Big Data Analytics notes

Big data is a large amount of complex data which is increasing rapidly. It can be of any type and is often inconsistent in nature. It cannot be processed through traditional methods.

Characteristics of Big data

There are four main characteristics of big data

1) volume: big data can be very large

2) variety: big data can be of any data type

3) veracity: big data can be inconsistent at times

4) velocity: the speed at which big data is generated is fast

Importance of big data

In today's world, with a population of 8 billion, big data plays an important role.

1) In business, big data can generate meaningful insights through big data tools to increase customer satisfaction.

Ex: a company can use big data to analyze customer behavior and recommend relevant products.

2) Big data also plays an important role in science.

Large numbers of tests can be analyzed very effectively using big data.

3) Personal use: big data can also be useful to us.

Ex: Spotify's yearly recap shows each person what kind of music they listened to that year.

Applications of Big Data

Big data is used and applied in almost every sector nowadays.

This is made possible through tools like Hadoop.

Some of the applications of big data are

1) In hospitals, to manage large numbers of patients

2) in business, to bring out meaningful insights

3) in educational institutions, to evaluate student performance

4) in social media

5) in search engines

6) in online shopping

7) in scientific research


Data sources: are those sources from which we can acquire big data.

Internal data sources: these are commonly used in organizations and rely on sensor technologies, i.e., the organization collects data from its own devices, such as audio, video and temperature readings.

Ex: mobile phones, IoT devices

Third-party data sources: when a small business does not have enough money or infrastructure for internal data sources, it goes for third-party data sourcing, i.e., it sources data using third-party technology.

It is commonly seen on small web pages.

It collects data like the number of clicks and opens.

Example: Google Analytics

External data sources: these data are collected by a different party and are open to be accessed by anyone. Ex: social media

Open data sources: these are similar to external sources, but open data sources are often very complex and not always directly relevant for us. These can be scientific data, research data or government data.

Ex: www.govt.uk

Through these sources we can acquire big data.

Structured vs Unstructured
Structured: these are in tabular format.
Can be interpreted by machines.
Easy to analyze; analysis can be done by both machines and humans.
Can use tools like SQL, Oracle.

Unstructured: these are in video, audio or image format.
Can be difficult for machines to interpret.
Difficult to analyze; analysis can be done only by humans.
Uses tools like NoSQL, Hadoop.

Pig Architecture
There are 4 main parts of Pig:

1) parser: it checks the syntax of Pig scripts and performs semantic checks. After checking, it converts the Pig script into a DAG of logical operators. The parser sends this DAG to the optimizer.

2) optimizer: it takes the DAG from the parser and applies optimizations like projection and push-down to remove unnecessary columns. It also optimizes the logical plan of the script. After that it sends the optimized DAG to the compiler.

3) compiler: the compiler takes the optimized DAG and compiles it into a series of MapReduce jobs. As multi-query execution is available in the Pig compiler, it can rearrange the order of jobs so they execute efficiently.

4) execution engine: it takes the final compiled MapReduce jobs and executes them.

The other components are
i) Grunt shell: a command-line interface for Pig
ii) Apache Pig framework: where all the libraries are stored
iii) MapReduce: where mapping and reducing are done
iv) finally HDFS: where the MapReduce output is stored.

Hive Architecture 
The process of the Hive architecture is similar to Apache Pig.

Hive server: the Hive server takes all the requests from clients and sends them to the Hive driver.
Hive driver: the driver compiles and optimizes the queries into DAG form and sends them to the execution engine as MapReduce tasks.
Execution engine: it executes all the MapReduce jobs.
Metastore: it stores all the information about the data present in Hive, i.e., metadata about tables and columns. It also serializes and deserializes data.
CLI: the Hive command-line interface.
Hive Web UI: a GUI, commonly provided online.

Hive Clients:
1) Thrift server: it connects all programming languages that support Thrift to Hive.
2) JDBC driver: as Hive is built on top of MapReduce and uses Java, the JDBC driver connects Java applications to Hive.
3) ODBC driver: it connects applications that support ODBC to Hive.