The Research Mining Technology

Saturday, 3 May 2014

Univariate Outlier Detection Based On Normal Distribution

Data involving only one attribute or variable are called univariate data. For simplicity, we often assume that such data are generated from a normal distribution. We can then learn the parameters of the normal distribution from the input data and identify the points with low probability as outliers.
Univariate outlier detection using maximum likelihood: 
Suppose a city’s average temperature values in July in the last 10 years are, in value-ascending order, 24.0°C, 28.9°C, 28.9°C, 29.0°C, 29.1°C, 29.1°C, 29.2°C, 29.2°C, 29.3°C and 29.4°C. Let’s assume that the average temperature follows a normal distribution, which is determined by two parameters: the mean, μ, and the standard deviation, σ.
We can use the maximum likelihood method to estimate the parameters μ and σ. That is, we maximize the log-likelihood function
\[\ln L(\mu ,{{\sigma }^{2}})=-\frac{n}{2}\ln (2\pi )-\frac{n}{2}\ln {{\sigma }^{2}}-\frac{1}{2{{\sigma }^{2}}}\sum\limits_{i=1}^{n}{{{({{x}_{i}}-\mu )}^{2}}}\]
where n is the total number of samples, which is 10 in this example.
Taking derivatives with respect to μ and σ² and solving the resulting system of first-order conditions leads to the following maximum likelihood estimates:
\[\hat{\mu }=\frac{1}{n}\sum\limits_{i=1}^{n}{{{x}_{i}}},\qquad {{\hat{\sigma }}^{2}}=\frac{1}{n}\sum\limits_{i=1}^{n}{{{({{x}_{i}}-\hat{\mu })}^{2}}}\]
In this example, we have
\[\hat{\mu }=\frac{24.0+28.9+28.9+29.0+29.1+29.1+29.2+29.2+29.3+29.4}{10}=28.61\]
and
\[{{\hat{\sigma }}^{2}}=\frac{1}{10}\sum\limits_{i=1}^{10}{{{({{x}_{i}}-28.61)}^{2}}}\approx 2.38\]
Accordingly, we have $\hat{\sigma }\approx 1.54$.

The most deviating value, 24.0ºC, is 4.61ºC away from the estimated mean. Under the normal distribution assumption, the region $\mu \pm 3\sigma $ contains 99.7% of the data. Because 24.0ºC lies roughly three standard deviations below the estimated mean, the probability that it was generated by the fitted normal distribution is below 0.15%, so it can be identified as an outlier.
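The whole procedure fits in a few lines of code. Here is a minimal Python sketch (the blog's own sessions use R; this is just an illustration) that fits μ and σ by maximum likelihood to the ten temperatures above and flags values whose tail probability falls below the 0.15% mentioned in the text:

```python
import math

# The ten July temperatures from the example above
temps = [24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4]

n = len(temps)
mu = sum(temps) / n                                        # MLE of the mean
sigma = math.sqrt(sum((x - mu) ** 2 for x in temps) / n)   # MLE of sigma (divide by n, not n-1)

def lower_tail(x):
    """P(X <= x) under the fitted normal, via the complementary error function."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(-z / math.sqrt(2))

# Flag values whose tail probability on either side is below 0.15%
outliers = [x for x in temps if min(lower_tail(x), 1 - lower_tail(x)) < 0.0015]
```

Running this recovers the estimates from the text (μ̂ = 28.61, σ̂ ≈ 1.54) and flags 24.0ºC as the only outlier.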

Thursday, 1 May 2014

Mahalanobis Distance using R code

Mahalanobis distance is one of the standardized distance measures in statistics. It is a unit-less distance measure introduced by P. C. Mahalanobis in 1936. Here I use R code and one example with a multivariate data set to find the Mahalanobis distance.
Mahalanobis Distance Formula: ${{D}^{2}}=(x-\mu )'{{\Sigma }^{-1}}(x-\mu )$
where,
x - vector of observations
μ - mean vector
Σ - covariance matrix
Now let us go through an example program.
First step:
Open R and start a new script.
Second step:
Import your data set. (If your data is in xls format, save it as csv, since csv files are comma-separated, which is convenient for R input; this is only my suggestion, and other supported formats work too.)
Now import the data using the code below:
> inputname <- read.csv(file="C:/filename.csv",head=TRUE,sep=",")
(I saved my data file in the C:/ directory, so I use the path above; if your file is in another directory, use that path together with filename.csv.)
> inputname

Example: 
Here I use the tobacco data set for test purposes.

> tobacco <- read.csv(file="C:/tobacco.csv",head=TRUE,sep=",")
> tobacco
   BurnRate PercentSugar PercentNicotine
1      1.55        20.05            1.38
2      1.63        12.58            2.64
3      1.66        18.56            1.56
4      1.52        18.56            2.22
5      1.70        14.02            2.85
6      1.68        15.64            1.24
7      1.78        14.52            2.86
8      1.57        18.52            2.18
9      1.60        17.84            1.65
10     1.52        13.38            3.28
11     1.68        17.55            1.56
12     1.74        17.97            2.00
13     1.93        14.66            2.88
14     1.77        17.31            1.36
15     1.94        14.32            2.66
16     1.83        15.05            2.43
17     2.09        15.47            2.42
18     1.72        16.85            2.16
19     1.49        17.42            2.12
20     1.52        18.55            1.87
21     1.64        18.74            2.10
22     1.40        14.79            2.21
23     1.78        18.86            2.00
24     1.93        15.62            2.26
25     1.53        18.56            2.14
> mean<-colMeans(tobacco)
> mean
       BurnRate    PercentSugar PercentNicotine
         1.6880         16.6156          2.1612
> cm<-cov(tobacco)
> cm
                   BurnRate PercentSugar PercentNicotine
BurnRate         0.02787500   -0.1098050      0.01886083
PercentSugar    -0.10980500    4.2276840     -0.75646533
PercentNicotine  0.01886083   -0.7564653      0.27466933
> D2<-mahalanobis(tobacco,mean,cm)
> D2
 [1] 3.08827463 5.35466197 1.37251420 2.61209613 2.07211223 8.90626020
 [7] 1.85354309 1.96263411 1.10087851 7.04624993 1.56621848 0.78813845
[13] 3.37468305 3.77347055 2.78904427 0.99063959 5.87881205 0.08359811
[19] 1.47435780 1.45810005 1.80081271 5.88148893 2.52555955 2.13920930
[25] 2.10664213
>
Now you have the Mahalanobis distance values for further analysis. That's all.
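For readers outside R, the same computation can be sketched in Python with NumPy (this mirrors colMeans(), cov() and mahalanobis() from the session above; the rows are simply the first eight rows of the tobacco table, reused for illustration):

```python
import numpy as np

# First eight rows of the tobacco table above, reused purely to illustrate
X = np.array([
    [1.55, 20.05, 1.38],
    [1.63, 12.58, 2.64],
    [1.66, 18.56, 1.56],
    [1.52, 18.56, 2.22],
    [1.70, 14.02, 2.85],
    [1.68, 15.64, 1.24],
    [1.78, 14.52, 2.86],
    [1.57, 18.52, 2.18],
])

mu = X.mean(axis=0)             # column means, the analogue of colMeans(tobacco)
S = np.cov(X, rowvar=False)     # sample covariance matrix, the analogue of cov(tobacco)
diff = X - mu
# D2[i] = (x_i - mu)' S^{-1} (x_i - mu), the analogue of mahalanobis(tobacco, mean, cm)
D2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)
```

A handy sanity check: with the sample covariance (divisor n-1), the squared distances always average to p(n-1)/n, where p is the number of variables.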


Monday, 3 March 2014

Research Methodology Paper-1 Syllabus for Statistics

Unit-I

Concept of Research – Importance of Research – Ethics in Research – Selection of Research Topics and Problems – Research in Statistics – Literature Survey and its Importance

Unit-II

Preparation of Assignments, Theses and reports – Significance of Publications in
Research – Journals in Statistics

Unit-III

Introduction to stochastic processes – Classification of stochastic processes according to state space and time domain – Countable state Markov chains – Chapman-Kolmogorov equations – Calculation of n-step transition probability and its limit – Stationary distribution – Classification of states – Weakly stationary process and Gaussian process.

Unit-IV

Time series – Auto covariance and auto correlation functions and their properties – Detailed study of the stationary process – Moving average – Autoregressive – Auto regressive moving Average – Autoregressive integrated moving average. Box – Jenkins models.

Unit-V

Simulation: Concept and Advantages of Simulation – Event-type Simulation – Generation of Random Numbers using Uniform, Exponential, Gamma and Normal Random Variables – Monte-Carlo Simulation Technique – Algorithms.

Reference:

1. Anderson J. (1977), Thesis and Assignment Writing, Wiley Eastern Limited, New Delhi.

2. Box G.E.P. and Jenkins G.M. (1976): Time series analysis – forecasting and control, Holden-Day, San Francisco.

3. Cox, D.R. and A.D. Miller: The Theory of Stochastic Processes, Methuen, London

4. Kanti Swarup, Gupta, P.K., and Man Mohan (2008), Operations Research, Sultan Chand & Sons Publications, New Delhi.

5. Kothari, C.R. (2006), Research Methodology, Prentice-Hall of India (P) Limited, New Delhi.

6. MLA Handbook for Writers of Research Papers, Modern Language Association, New York.

Saturday, 19 October 2013

K-means clustering

K-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The k-means algorithm takes the input parameter, k, and partitions a set of n objects into k clusters so that the resulting intracluster similarity is high but the intercluster similarity is low. Cluster similarity is measured with regard to the mean value of the objects in a cluster, which can be viewed as the cluster’s centroid or center of gravity.
Given k, the k-means algorithm is implemented in four steps:
  • Partition objects into k nonempty subsets
  • Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster)
  • Assign each object to the cluster with the nearest seed point
  • Go back to step 2; stop when there are no new assignments
Finally, this algorithm aims at minimizing an objective function, in this case a squared-error function:
\[E=\sum\limits_{i=1}^{k}{\sum\limits_{p\in {{C}_{i}}}{{{\left| p-{{m}_{i}} \right|}^{2}}}}\]
where E is the sum of the squared error for all objects in the data set; p is the point in space representing a given object; and $m_i$ is the mean of cluster $C_i$ (both p and $m_i$ are multidimensional).
Algorithm: k-means.
The k-means algorithm for partitioning, where each cluster’s center is represented by the mean value of the objects in the cluster.
Input:
k: the number of clusters,
D: a data set containing n objects.
Output: A set of k clusters.
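The four steps above can be sketched in a few lines of Python (a minimal illustration of the standard Lloyd iteration; the function, the toy 2-D points and the seed are invented for this example, not taken from any particular library):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on 2-D tuples: assign to nearest seed, re-average, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                # step 1: k initial seed points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                             # step 3: nearest-seed assignment
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        new_centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)         # step 2: centroids = cluster means
        ]
        if new_centroids == centroids:               # step 4: stop when nothing moves
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated blobs; k-means should find one center in each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(pts, 2)
```

On this toy data the algorithm settles on one centroid near each blob, which is exactly the globular case where k-means does well.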
Advantages: with a large number of variables, k-means may be computationally faster than hierarchical clustering (if k is small). K-means may produce tighter clusters than hierarchical clustering, especially if the clusters are globular.
Disadvantages:
·         Difficulty in comparing the quality of the clusters produced.
·         Applicable only when mean is defined.
·         Need to specify k, the number of clusters, in advance.
·         Unable to handle noisy data and outliers.  
·         Not suitable to discover clusters with non-convex shapes. 

Research Methodology - Objectives and Motivation of research

Research in common parlance refers to a search for knowledge. One can also define research as a scientific and systematic search for pertinent information on a specific topic. In fact, research is an art of scientific investigation. The Advanced Learner’s Dictionary of Current English lays down the meaning of research as a “careful investigation or inquiry specially through search for new facts in any branch of knowledge”. Redman and Mory define research as a “systematized effort to gain new knowledge”. Some people consider research as a movement, a movement from the known to the unknown. It is actually a voyage of discovery. We all possess the vital instinct of inquisitiveness, for when the unknown confronts us we wonder, and our inquisitiveness makes us probe and attain full and fuller understanding of the unknown. This inquisitiveness is the mother of all knowledge, and the method which man employs for obtaining the knowledge of whatever is unknown can be termed research.
Research is an academic activity and as such the term should be used in a technical sense. According to Clifford Woody, research comprises defining and redefining problems; formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulated hypotheses.
Objectives of Research:
The purpose of research is to discover answers to questions through the application of scientific procedures. The main aim of research is to find out the truth which is hidden and which has not been discovered as yet. Though each research study has its own specific purpose, we may think of research objectives as falling into a number of following broad groupings:
1. To gain familiarity with a phenomenon or to achieve new insights into it
2. To portray accurately the characteristics of a particular individual, situation or a group
3. To determine the frequency with which something occurs or with which it is associated with something else
4. To test a hypothesis of a causal relationship between variables.
Motivation in Research
What makes people undertake research? This is a question of fundamental importance. The possible motives for doing research may be one or more of the following:
1. Desire to get a research degree along with its consequential benefits;
2. Desire to face the challenge in solving the unsolved problems, i.e., concern over practical problems initiates research;
3. Desire to get intellectual joy of doing some creative work;
4. Desire to be of service to society;
5. Desire to get respectability.
However, this is not an exhaustive list of factors motivating people to undertake research studies. Many more factors such as directives of government, employment conditions, curiosity about new things, desire to understand causal relationships, social thinking and awakening, and the like may as well motivate (or at times compel) people to perform research operations.

Reference:


Kothari, C.R. (2006), Research Methodology, Prentice-Hall of India (P) Limited, New Delhi.
 



Wednesday, 19 December 2012

Artificial Neural Networks

What are Neural Networks?
  •  Models of the brain and nervous system
  • Highly parallel 
               Process information much more like the brain than a serial computer
  • Learning
  • Very simple principles
  • Very complex behaviors
  • Applications
1.      As powerful problem solvers
2.      As biological models
Biological Neural Nets
Pigeons as art experts (Watanabe et al. 1995)
      Experiment:
  • Pigeon in Skinner box
 
  • Present paintings of two different artists (e.g. Chagall / Van Gogh)
 
  • Reward for pecking when presented a particular artist (e.g. Van Gogh)
  • Pigeons were able to discriminate between Van Gogh and Chagall with 95% accuracy (when presented with pictures they had been trained on)
  • Discrimination still 85% successful for previously unseen paintings of the artists
  • Pigeons do not simply memorise the pictures
  • They can extract and recognise patterns (the ‘style’)
  • They generalise from the already seen to make predictions
  • This is what neural networks (biological and artificial) are good at (unlike conventional computers)
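The “very simple principles, very complex behaviors” point can be made concrete with the simplest possible artificial neuron. The sketch below trains a single perceptron with an error-driven update rule; the function name and the 2-D data are invented for illustration and have no connection to the pigeon study:

```python
# A single artificial neuron (perceptron) trained with an error-driven rule.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                  # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1           # nudge the weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Train on a few points from two "styles", then test on an unseen point
train_x = [(1, 1), (2, 1.5), (5, 5), (6, 5.5)]
train_y = [0, 0, 1, 1]
w, b = train_perceptron(train_x, train_y)
unseen = (5.5, 6)                           # never shown during training
pred = 1 if w[0] * unseen[0] + w[1] * unseen[1] + b > 0 else 0
```

Like the pigeons, the neuron classifies a point it has never seen by generalizing from the training examples rather than memorizing them.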
 


Saturday, 17 November 2012

R code for Wilcoxon rank sum test



Example 1 (R-Code Script)
     Two samples of young walleye were drawn from two different lakes and the fish were weighed. The data (in g) are:
R-Code and Results:
> X.1<-c(253,218,292,280,276,275)
> X.2<-c(216,291,256,270,277,285)
> sample<-c(rep(1,6),rep(2,6))
> w<-data.frame(c(X.1,X.2),sample)
> names(w)[1]<-'weight(g)'
> cbind(w[1:6,],w[7:12,])
  weight(g) sample weight(g) sample
1       253      1       216      2
2       218      1       291      2
3       292      1       256      2
4       280      1       270      2
5       276      1       277      2
6       275      1       285      2
> idx<-sort(w[,1],index.return=TRUE)
> d<-rbind(weight=w[idx$ix,1],sample=w[idx$ix,2],
+ rank=1:12)
> dimnames(d)[[2]]<-rep('',12);d
                                                      
weight 216 218 253 256 270 275 276 277 280 285 291 292
sample   2   1   1   2   2   1   1   2   1   2   2   1
rank     1   2   3   4   5   6   7   8   9  10  11  12
> rank.sum<-c(sum(d[3,d[2,]==1]),
+ sum(d[3,d[2,]==2]))
> rank.sum<-rbind(sample=c(1,2),
+ 'rank sum'=rank.sum)
> dimnames(rank.sum)[[2]]<-c('','');rank.sum
             
sample    1  2
rank sum 39 39
> wilcox.test(X.1,X.2)

        Wilcoxon rank sum test

data:  X.1 and X.2
W = 18, p-value = 1
alternative hypothesis: true location shift is not equal to 0
>
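The manual ranking in the R script can be reproduced in a few lines of plain Python, using the same walleye weights; this also shows where R's W statistic comes from (the rank sum of the first sample minus n1(n1+1)/2):

```python
# Pure-Python version of the manual ranking done in the R script above
x1 = [253, 218, 292, 280, 276, 275]
x2 = [216, 291, 256, 270, 277, 285]

combined = [(wt, 1) for wt in x1] + [(wt, 2) for wt in x2]
combined.sort()                               # like sort(w[,1], index.return=TRUE)

rank_sum = {1: 0, 2: 0}
for rank, (wt, sample) in enumerate(combined, start=1):
    rank_sum[sample] += rank                  # accumulate each sample's rank sum

# R's W statistic: rank sum of the first sample minus n1*(n1+1)/2
W = rank_sum[1] - len(x1) * (len(x1) + 1) // 2
```

Both rank sums come out to 39 and W = 18, matching the R output, which is why the test finds no evidence of a location shift between the two lakes.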
