A Generalized Framework for Mining Web Content Outliers

Table of Contents
1. Abstract
2. Introduction
3. Introduction to Web Mining
4. Web Mining Outliers
4.1 Web Structural Topological Mining
4.2 Web Usage Mining
4.3 Generalized Framework for Mining Web Content Outliers
5. Information Retrieval
5.1 Web Crawling
5.2 Text Mining
5.3 Image Mining, Video Mining
5.4 Curse of Dimensionality
5.5 Machine Learning
5.5.1 Supervised Learning
5.5.2 Unsupervised Learning
6. Other Approaches for the Identification of Web Outliers
7. Conclusion
8. References

1. ABSTRACT
Web mining is the process of extracting information from the colossal amount of data available online. This information is mostly retrieved in response to search queries provided by users, with the search engine generating the Web content most relevant to the searched query. With the advent of new and constantly evolving search requirements, most of which are intended for use in planning business strategies, identification of outliers or anomalies has become a challenging task for online data miners. Online data mining is rapidly growing into a promising as well as profitable industry, and since identification of outliers is a vital part of data mining, its importance is growing accordingly. Web mining can be divided into three categories, i.e. Web content mining, Web structural topology mining, and Web usage mining. The outliers in all three Web mining categories have their own significance and are generated by different processes. This paper discusses the significance of outliers and the methods for identifying them in each of the three categories. For Web content mining, a generalized framework is presented for the identification of outliers in textual content on the Web.

2. INTRODUCTION
During the last two decades the wealth of data on the World Wide Web has increased exceedingly. This data includes text, images, videos, audio, maps, etc. Such a massive amount of data with different content types has attracted billions of users within the last few years. The advent of social networking web sites and micro-blogging, as well as the increase in Internet usage on mobile phones, added an avalanche of data on the Web, which varied in context and attracted all sorts of interest groups. This constant increase in information has created a great need for fast, relevant, and mature content search methods that can sift through information, understand it, and generate search results in the shortest possible time. This requirement resulted in the development of revolutionary search engines like Bing, Google, etc. These search engines not only perform rapid searches over the query provided by the user, they also maintain huge repositories of data classified into specific categories. Several challenges exist within such classification processes, which can cause imprecise search results if they are not dealt with properly. Several methods have been proposed to match the most relevant Web content with the user's query; however, it is a matter of debate whether search engines should generate only the directly matched results or also provide some related results. Both methods have their own significance, and a mixed approach is in practice at different search engines.
Google's PageRank method gained great popularity and has been modified by others for different purposes; the importance or relevance of a page is not measured only for generating the top query results, as several other kinds of processes are associated with this page ranking method. For instance, the computational advertisement industry has grown into a $20 billion industry this year and is expected to grow further, and the big search engine groups are the leaders in this industry so far. Advertisements are supposed to be placed on relevant pages so that they are viewed by the relevant, i.e. targeted, customers, which not only increases the revenue for their customers in terms of return on investment but also moves towards personalized or behavioral targeting. Out of several billion web pages containing videos, images, or text, and independent of the nature of the search, it is always a challenging question which content is required and which content has to be discarded. This association, or identification of relevant data and exclusion of non-relevant data, involves several challenges, including improved text, video, or image mining techniques, precise machine learning methods (supervised and unsupervised), and finally deciding outliers with respect to the nature of the decision function.

This paper considers the Web mining process in three different categories, i.e. Web content mining, Web usage mining, and Web structural topological mining. The process of mining outliers from the three categories needs separate methodologies. The paper discusses the significance of and methods for identifying outliers for the three categories in the next sections. Section 4 discusses in detail the methods for identifying outliers in general. A framework is presented in section 4.3 describing the method for mining outliers from Web content. The framework comprises several modules, including a web crawler, text mining, dimension reduction, and machine learning approaches for classification. Each module is discussed in detail along with implementation challenges and open problems. The framework allows the development of a complete system from the data extraction stage to the identification of outliers. The framework is intended to mine outliers by locating closely related datasets and then marking the rest as the outliers; it is basically an inverse approach with respect to the general information retrieval process. The following part elaborates on the web mining problems.

3. INTRODUCTION TO WEB MINING
To our knowledge, the term web mining was first coined by (Etzioni, 1996). Etzioni's work started with the hypothesis that the information content on the web is already structured, and it outlines the sub-tasks of web mining. There are several categories of web mining, including web content mining, web usage mining, and web structure mining. Web content mining describes the process of discovering information from the Web content data: documents, images, and video. In fact, web content data consists of multiple types, such as video, text, audio, metadata, and hyperlinks. Mining multiple types of data is termed multimedia data mining; therefore, we consider multimedia data mining an instance of web content mining. Web structure mining refers to the topological structure of the hyperlink interconnections.
These interconnections can be represented with the help of a graph G(V, E), where V is the set of all vertices, each vertex being a web page, and E is the set of all edges in the graph, each edge representing a forward or backward link of a web page (Huberman B., 2001). An important aspect of the Web is that the data that appears on search pages, produced by the search engines, comes from 'crawlers', i.e. web pages are found through the hyperlinks of other web pages. Therefore our picture of the World Wide Web is necessarily 'biased'. A search engine gives more weight to web pages that have a higher number of back links. Similarly, only those web pages will be shown in the search results that some other page points to, which does not represent the true picture of the Web (Lawrence, S., Giles, C. L., 1999), because not all web pages are linked by others. Some search engines provide the option of adding web pages to their repository directly; however, this does not cover the major part of the pages available online. Here the importance of 'outliers' can be felt in Web structural mining: Web pages with less connectivity to other pages may also contain valuable information and also have to be considered for further processing. A detailed discussion is given in section 4.1.

In contrast to web content mining and web structure mining, web usage mining tries to study the data generated by Web users' and surfers' sessions or behaviors. Web usage mining, in contrast to the others, works upon the secondary data generated from the primary or raw web-based data. Web usage data includes browser logs, user profile data, registration data, web server access logs, proxy server logs, cookies, transactions, e-shopping data (i.e. consumer buying patterns, demographics, purchase history), recommendations and reviews (important for product marketing and customer relationship management systems, and used for generating product recommendations for potential customers), bookmarked data, and many more. With the popularity of social networking web sites like Facebook, Twitter, Flickr, MySpace, LinkedIn, etc., users have uploaded a huge amount of private and professional information onto the web. This has caused a tremendous increase in web-based social and professional interactions. Micro-blogging sites like Twitter generate a huge amount of data just from short tweets, while Facebook encompasses a blend of social data, including images, videos, messages, and more. Such huge amounts of instantaneously increasing data lead researchers and the business community to develop new methods for information retrieval and information extraction from online data sources.

4. WEB MINING OUTLIERS
Identification of outliers is likely to be considered an important business strategy in the near future, as nowadays companies all over the world look forward to operating via the Internet, due to which online market surveys have become a vital part of any business plan. Detection of outliers can play a critical role in such situations: an outlying point in a survey of multiple businesses of a similar sort but with varying degrees of success can represent a market-capturing factor or an innovative measure taken by the most successful business of the lot. Due to such a promising future, the identification of outliers has become a hot issue in many research circles, typically those of the computational and statistical sciences. New outlier analysis techniques are proposed every day; a particular example is the N-gram system. In this system, the data is sliced into N-grams of equal sizes and then matched in order to identify the outlying anomaly. Such systems are considerably more effective than conventional word matching techniques, as word matching suffers miserably at the hands of badly spelled words, which are abundant in online documents. The N-gram technique handles such problems effectively and is also considered proficient in detecting different but related words by identifying the inflections of the root words.
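To make the N-gram idea above concrete, the following is a minimal sketch in Python (not taken from the paper; the trigram size, Jaccard measure, and threshold are illustrative assumptions) of how character N-grams can be used to score how far a document lies from the rest of a small collection.

```python
def char_ngrams(text, n=3):
    """Return the set of character n-grams of a lowercased, whitespace-normalized document."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two n-gram sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def ngram_outliers(docs, n=3, threshold=0.15):
    """Flag documents whose best n-gram similarity to any other document
    falls below the threshold (illustrative cutoff)."""
    grams = [char_ngrams(d, n) for d in docs]
    outliers = []
    for i, gi in enumerate(grams):
        best = max(jaccard(gi, gj) for j, gj in enumerate(grams) if j != i)
        if best < threshold:
            outliers.append(i)
    return outliers

docs = [
    "cheap flights and hotel deals for summer travel",
    "chaep flihgts and hotel deals for your summer trip",   # misspelled but related
    "quantum error correction codes for superconducting qubits",
]
print(ngram_outliers(docs))   # [2] -- only the unrelated document is flagged
```

Because overlapping trigrams survive spelling errors, the misspelled second document still matches the first, while the unrelated third document is flagged; this is exactly the robustness the N-gram approach claims over plain word matching.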
4.1 Web Structural Topological Mining
It is now possible for a web crawler to further use such link information for knowledge creation. For instance, the forward links can be used to develop a web link graph, which represents the overall picture of the 'linked status' of web sites. In fact, the linked status represents the worth of a web site based on the number of back links; more specifically, if authoritative web pages link to a web page, its worth increases as well. A snapshot of such web link graphs was generated by (Toyoda M., Kitsuregawa M., 1999), (Toyoda M., Kitsuregawa M., 2006) (Fig 2, not reproduced here). Extensive node analysis of such graphs elucidates interesting information about the 'centric web pages' (i.e. the most referred web content on some topic), the 'outliers' (i.e. those web pages that have no relevance to the specified domain; such web sites appear at the corners of the Web structure in Fig 2), and influence marketing; and on top of all, Google's PageRank algorithm uses such web link graphs to reveal the web pages deemed most relevant by web page creators. These results further associate the user's query and exclude the outliers to some extent. However, outliers have worth in many cases and are mostly underestimated. For example, if someone manages to develop a unique product regarding human resources, on the very first day the web page gets uploaded it might not have any worth to the crawler, so it might not be shown in the search results. Secondly, this is a very specific product and will not have many back links from other web sites, nor indeed many out links. However, the establishment of some links with reputed web sites may bring it to the corner of the Web graph. These important, yet isolated, web sites can only get consideration when outlier identification algorithms are applied over Web graphs.

4.2 Outliers in Web Usage Mining
Web usage data is secondary in nature and is generated from shopping carts, social networking web sites, recommender systems, web analytics systems (traffic monitoring systems like Google Analytics, or Alexa from Amazon, etc.), consumer purchase history, search patterns, etc. Outliers have gained importance in all such systems in different contexts. For example, at social networking web sites like Facebook or LinkedIn, most of the association algorithms generate relevant friend suggestions based on frequency of interaction or closeness with respect to the current job, interests, or favorites. However, when such a system recommends friends from remote connections without a direct connection, it makes an impression on the user and sometimes catches the user's interest directly.
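Returning to section 4.1, the following is a minimal sketch (Python; the toy link data and the back-link cutoff are illustrative assumptions, not part of the paper) of how a web link graph G(V, E) can be scanned for weakly connected pages, i.e. the structural outlier candidates that sit at the corners of the graph.

```python
from collections import defaultdict

# Toy forward-link data: page -> pages it links to (illustrative only).
forward_links = {
    "hub.example":   ["a.example", "b.example", "c.example"],
    "a.example":     ["b.example", "hub.example"],
    "b.example":     ["a.example", "hub.example"],
    "c.example":     ["hub.example"],
    "niche.example": ["hub.example"],     # links out, but nobody links back to it
}

def back_link_counts(graph):
    """Count back links (in-degree) for every page in the link graph."""
    counts = defaultdict(int)
    for src, targets in graph.items():
        counts[src] += 0                  # make sure unlinked sources appear
        for dst in targets:
            counts[dst] += 1
    return counts

def structural_outliers(graph, min_back_links=0):
    """Pages whose back-link count is at or below the cutoff are treated
    as structural outlier candidates."""
    counts = back_link_counts(graph)
    return [page for page, c in counts.items() if c <= min_back_links]

print(structural_outliers(forward_links))
# ['niche.example'] -- the page no other page points to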
Similarly, a consumer behavioral system analyzes the buying patterns of its customers. These buying patterns are mostly governed by particular events or occasions, like World Cup logo-based T-shirts, which raises the question of which items have to be promoted. Sometimes the 'rarity' specification is needed on such sites: normal shopping trends show conventional seasonal patterns, whereas the rare or unique items can only be recommended with the help of 'outlier mining'.

4.3 Generalized Framework for Mining Web Content Outliers
Web mining and the identification of outliers is not an easy task due to the diversity in the nature of the available Web content. A similar kind of framework can be used to develop relationships within documents and to extract valuable information by removing the outliers. In this paper, we propose a similar methodology for the inverse objective, i.e. finding outliers by removing the matched or relevant documents. Here, we restrict our work to textual data sets by using an information retrieval approach (the information retrieval approach is presented in detail in section 5, with a brief description of each component of the framework). Fig 3 presents an overview of the framework.

In this section a framework is proposed for textual web mining and the identification of outliers. The framework is aimed at retrieving information from the Web and then extracting knowledge from the huge collection of documents, which is further used in the classification of text documents. Though the framework could be used for any type of data, so far our experience is limited to textual data sets, therefore the scope is limited to text documents. The framework covers both types of classification, i.e. supervised and unsupervised. Supervised classification helps in relating relevant documents and excluding others; however, because of the wealth of available Web data it is not possible to categorize every topic, which results in a collection of unclassified documents. Unsupervised classification approaches address this problem and relate relevant documents without any prior knowledge. This further classification handles the rest of the documents; however, there will still be a residue of documents left unclassified. According to the presented framework and the defined boundary, these remaining documents are treated as the outliers. The presented framework has a limited scope and is based on web crawling and information extraction. Seed keywords are used to dig out the top search pages from known search engines like Google, Bing, Yahoo, etc. This bulk of top pages provides the collection of documents from which the crawler performs an exhaustive search; from here the crawler starts retrieving web pages and saving their textual data in separate documents.

Fig 3. Framework for the identification of outliers (not reproduced here)

Limitations of the presented framework:
a. Documents containing images or videos cannot be considered; the framework is limited to textual data.
b. Results will differ when different algorithms are chosen for supervised and unsupervised learning.
c. Web content with few matches has a higher tendency to be marked as an outlier and may not be given the attention it deserves.
d. A lack of supplied sample data (training data) for supervised classification may further restrict the performance of the overall system.
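The following is a minimal sketch of the framework's overall flow in Python; the function names, placeholder crawler, thresholds, and the use of scikit-learn components are assumptions made for illustration and are not the implementation described in this paper. It shows the inverse logic described above: whatever neither the supervised classifier nor the clustering step can place with confidence is kept as an outlier candidate.

```python
# Sketch of the pipeline: crawl -> vectorize -> supervised classification
# -> clustering of the remainder -> residue = outlier candidates.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans

def crawl(seed_keywords):
    """Placeholder for the crawler module: a real implementation would query
    search engines with the seed keywords and fetch the top pages."""
    return [
        "data mining extracts patterns from large data sets",
        "text mining classifies documents by term frequency",
        "support vector machines separate classes with a margin",
        "clustering groups similar documents without labels",
        "recipe for baking sourdough bread at home",          # off-topic
    ]

def find_outliers(docs, labeled_docs, labels, conf=0.5, dist_cutoff=0.8):
    vec = TfidfVectorizer(stop_words="english")
    X_all = vec.fit_transform(labeled_docs + docs)
    X_train, X_new = X_all[:len(labeled_docs)], X_all[len(labeled_docs):]

    # Supervised stage: keep documents the SVM places with some margin.
    svm = LinearSVC().fit(X_train, labels)
    margins = np.abs(svm.decision_function(X_new))
    if margins.ndim > 1:                        # multi-class case
        margins = margins.max(axis=1)
    unassigned = [i for i, m in enumerate(margins) if m < conf]

    # Unsupervised stage: cluster what is left; documents far from every
    # centroid stay unassigned (cutoff is an illustrative value).
    if unassigned:
        X_rest = X_new[unassigned]
        km = KMeans(n_clusters=min(2, len(unassigned)), n_init=10).fit(X_rest)
        dists = km.transform(X_rest).min(axis=1)
        unassigned = [unassigned[i] for i, d in enumerate(dists) if d > dist_cutoff]

    return [docs[i] for i in unassigned]        # residue = outlier candidates

labeled = ["mining patterns from large data collections",
           "baking sourdough bread with yeast"]
labels = ["web_mining", "cooking"]
print(find_outliers(crawl(["web content mining"]), labeled, labels))
```

The skeleton mirrors the crawler, classification engine, and outlier extractor described in the framework, but every threshold and component choice shown here would need to be tuned against real data.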
5. INFORMATION RETRIEVAL
The growing amount of data on the web has created a great need for efficient methods of information extraction, information classification and categorization, ranking methods, conceptual classification, and finally the conceptual retrieval of data based on the user's query. Web-based information retrieval systems comprise many components, including a crawler, an indexer, a classification mechanism (either supervised or unsupervised), and finally conceptual mapping. A brief description of each component is given in the following subsections.

5.1 Web Crawling
Web crawling deals with automatic content extraction from the web based on some seed URLs. These URLs can be used as seeds for search engines and generate thousands of search results, which may then be used for web content gathering. Google, Bing, Yahoo, and other search engines offer APIs for web content extraction. A crawler extracts web pages, and further processing upon them separates the forward links, ads, images, text, metadata, and other tags. A stemmer then reduces words to their root forms, for example "working" to "work".

5.2 Text Mining
Extraction of more than just information (i.e. knowledge) from data can be termed 'mining'. Mapping relationships among data sets whose relevance is based upon variable criteria, and further utilizing the identified relationships for higher-level objectives, could be a better description of the term 'mining'. The huge amount of text-based data on the Web does not allow manual analysis; rather, it has to be analyzed and categorized automatically so that it can be used for further information generation. Text mining, or more generically text analysis, performs quick mining processes over hundreds of millions of text documents to establish relevancy among them, which can be used to develop association rules between user search queries and the respective web content. Text mining approaches rely heavily upon word-frequency-based methods and mostly use the TF-IDF algorithm for the development of the Vector Space Model (VSM). However, fake sites use only top keywords or search words and junk hyperlinks to attract the attention of search engines. Computational advertisement groups have to pay attention to outlining such fake sites among the millions of other web pages. The text mining approaches use term frequency for document cluster classification, whereas sometimes the features present in a text document can only be represented through the least frequently occurring, unique terms. Documents containing such least frequently occurring, hence unique, terms can be seen as outliers among the millions of documents. Therefore the inverse document frequency method can be used to find the outliers, and these outliers may represent very unique information content.
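As a concrete illustration of the inverse-document-frequency idea just described, the following small Python sketch (the corpus and the mean-idf scoring rule are illustrative assumptions) computes idf_k = log(N / df_k) by hand and shows how a document dominated by rare, high-idf terms stands out from the rest of the collection.

```python
import math
from collections import Counter

docs = [
    "web mining extracts information from web pages",
    "text mining uses term frequency to classify web documents",
    "mining web content relies on term frequency and classification",
    "zebrafish embryology under fluorescent microscopy",   # unique vocabulary
]

tokenized = [d.lower().split() for d in docs]
N = len(docs)

# Document frequency df_k: in how many documents each term appears.
df = Counter(term for toks in tokenized for term in set(toks))

# Inverse document frequency: idf_k = log(N / df_k).
idf = {term: math.log(N / count) for term, count in df.items()}

def mean_idf(tokens):
    """Average idf of a document's terms; high values mean the document
    is built from terms that are rare in the rest of the collection."""
    return sum(idf[t] for t in tokens) / len(tokens)

for score, doc in sorted(((mean_idf(t), d) for t, d in zip(tokenized, docs)), reverse=True):
    print(f"{score:.2f}  {doc}")
# The zebrafish document gets the highest mean idf and is the outlier candidate.
```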
5.3 Image Mining, Video Mining
In contrast to text mining approaches, mining images from web data is still a challenging problem for the community. Out of the several hundred thousand images available, it is still far from reality to categorize them by relevance. The trivial approach for image retrieval is based on keyword matching; (Zhang, 2001) added low-level image features to the semantic data for image retrieval. Establishing relationships based on image relevance is still in its infancy (Yanai, 2003). However, several methods have been proposed for web image mining; for example, (Yanai, Barnard, 2006) collected about 40,000 images from the web representing 150 concepts. Similar to the scarcity of reasonable image retrieval methods, concept extraction from videos is still very far off. Video mining involves more complexities than the image mining problem. Mining images and videos from the web is the first challenge, and the development of efficient algorithms for image and video classification comes afterwards. Therefore, in this paper only text-based web mining issues are discussed, and the proposed framework does not consider image or video web content. Finding outliers is not a pressing problem for this domain yet; indeed, with progress in image and video mining methods the importance of finding outlier data will grow accordingly.

5.4 Curse of Dimensionality
In the Vector Space Model (VSM) approach for document representation during text mining, each term represents a vector and its magnitude represents the strength or frequency of that term within the collection of documents. If more terms are common between documents, the text mining methods consider them relevant documents on the basis of term frequency. However, the situation gets worse when several thousand documents have to be mined and each document contains several thousand words: the VSM becomes more complex, it demands more space to work upon, and consequently more time is needed to complete the required processing. Singular value decomposition (SVD) and principal component analysis (PCA) are two famous methods widely used for reducing a high-dimensional space into lower dimensions. PCA and SVD work within their own limitations; however, without them it is not possible to perform classification and learning processes (either supervised or unsupervised) in a reasonable time.

Parsing options that reduce the dimensionality of the search space
Text mining tools include other techniques that reduce the complexity of the high-dimensional search space. These methods have several variations; some general techniques are as follows:
a. Removal of stop words: fixed, predetermined terms, which make up a major part of the documents, are removed. Removing stop words from the documents simplifies the dimension reduction problem somewhat.
b. Stemming: identifying the root terms among the many variants, which simplifies the vector space and hence the text mining job. These steps not only reduce the complexity of the vector space model but also improve the time complexity of the overall system.
c. Synonym list: pre-specified terms are mapped onto representative terms.
d. Removal of numeric terms: all terms that contain digits are removed as meaningless for text mining.
e. Removal of single-occurring terms: all terms that occur in no more than one document are removed; such a term does not contribute to the classification of documents as it has no relevance to others.
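A minimal sketch of these preprocessing and dimension-reduction steps follows (Python, assuming scikit-learn and its built-in English stop-word list; the crude suffix-stripping stemmer, the toy documents, and the choice of two SVD components are illustrative assumptions, not the paper's implementation).

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.decomposition import TruncatedSVD

docs = [
    "Working with thousands of web documents means mining terms from many pages",
    "Web mining works by extracting terms from large document collections",
    "Dimensionality reduction keeps the vector space model manageable in 2023",
]

def crude_stem(token):
    """Very rough suffix stripping; an illustrative stand-in for a real stemmer (step b)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(doc):
    """Tokenize, drop stop words (step a) and numeric terms (step d), then stem."""
    tokens = re.findall(r"[a-z0-9]+", doc.lower())
    return " ".join(crude_stem(t) for t in tokens
                    if t not in ENGLISH_STOP_WORDS and not t.isdigit())

cleaned = [preprocess(d) for d in docs]

# Build the high-dimensional, sparse vector space model, dropping terms that
# occur in only one document (step e), ...
vsm = TfidfVectorizer(min_df=2).fit_transform(cleaned)

# ... then project it onto a small number of latent dimensions with SVD.
svd = TruncatedSVD(n_components=2, random_state=0)
reduced = svd.fit_transform(vsm)
print(vsm.shape, "->", reduced.shape)
```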
5.5 MACHINE LEARNING
Machine learning emphasizes automatic methods for learning. In other words, the goal of a machine learning algorithm is to devise a method which learns automatically without human assistance. In a web mining system, automatic classification of text, video, and images is needed based on their content similarity. Machine learning methods provide approaches for complex classification problems based on supervised and unsupervised learning. A typical learning mechanism is as follows.

5.5.1 Supervised Learning
The goal of supervised learning is to build a method which learns a general model from externally supplied instances. During the supervised learning process, a classifier has to be found for the classification of the data. The resulting classifier is then used to assign class labels to the testing data sets, where the values of the predictor features are known but the class label is unknown. A generalized supervised machine learning process is presented in (Kotsiantis, 2007), Fig 5 (not reproduced here). The first step is the selection of the data set or the specific attributes; in the context of document classification, the selection of representative terms or words is a difficult task. Later in this section the inverse document frequency method is presented for importance weight selection. The second step is to deal with missing data; (Hodge & Austin, 2004) present a survey of techniques for outlier or noise detection. In the algorithm black box, different supervised machine learning algorithms can be used. There exist several supervised learning algorithms, including support vector machines (SVMs), neural networks (NN), logistic regression, decision trees, bagged trees, boosted trees, naive Bayes, and others. SVM is widely used for text classification because of its efficiency. Implementation of SVMs includes several steps; 'importance weights' are often used for dimensionality reduction. In fact, importance weights were originally designed for identifying index words by (Salton, McGill, 1983), for example the inverse document frequency (idf). For SVMs, idf is the most commonly used importance weight. The idf of a term w_k in a text document collection consisting of N documents is defined by idf_k = log(N / df_k), where df_k is the number of documents in the collection that contain the term w_k. The SVM is built around the notion of a 'margin' on either side of the hyperplane that separates two data classes (Fig 6, not reproduced here). The book by (Cristianini, Shawe-Taylor, 2000) presents the SVM functionality in detail. If the training data set is linearly separable, then a pair (w, b) exists such that
w^T x_i + b >= 1 for all x_i ∈ P,
w^T x_i + b <= -1 for all x_i ∈ N,
with the decision function given by f_{w,b}(x) = sgn(w^T x + b), where b is the bias and w is termed the weight vector.

5.5.2 Unsupervised Learning
Unsupervised learning approaches do not require any prior training data set for classification. Clustering algorithms (Jain A. K. et al., 1999) are widely used for the classification and grouping of relevant data without any training activity. The nearest neighborhood algorithm and the K-means algorithm have been tried within our framework. The Web contains a huge amount of data with great variety, so it is not possible to define all classes in advance; therefore the scope of supervised classification is limited. Unsupervised learning approaches work when documents cannot be classified through the defined categories or classes.
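To tie the decision function above to practice, here is a small Python sketch (assuming scikit-learn; the toy documents and labels are illustrative): it trains a linear SVM on idf-weighted vectors and checks that sgn(w^T x + b) reproduces the classifier's prediction for a new document.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "web mining extracts knowledge from web content",      # class P (+1)
    "crawler and text mining for web documents",           # class P (+1)
    "football match results and league standings",         # class N (-1)
    "league fixtures, football scores and standings",      # class N (-1)
]
labels = [1, 1, -1, -1]

# idf-based importance weights via TF-IDF.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

svm = LinearSVC().fit(X, labels)
w, b = svm.coef_.ravel(), svm.intercept_[0]

# Decision function f_{w,b}(x) = sgn(w^T x + b), applied to a new document.
x_new = vec.transform(["new web mining framework for finding outliers"])
score = x_new.dot(w)[0] + b
print("sgn(w^T x + b) =", int(np.sign(score)))   # matches the classifier below
print("svm.predict    =", svm.predict(x_new)[0])
```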
6. OTHER APPROACHES FOR THE IDENTIFICATION OF WEB OUTLIERS
The web is continuously growing at an exponential rate into a colossal repository of information, with accessibility options for the general public to perform research and develop new business opportunities. With the recent advancement of several online business-oriented activities, the Web has drawn the attention of computational scientists towards the extraction of knowledge from raw collections of text documents. Extraction of meaningful knowledge from text documents online is a process that involves several steps, ranging from ranking top content pages for some topic (for computational advertisement), finding consumer patterns for product recommendations, and generating recommendations on social networking web sites, to brand recognition based on Twitter activity. All such processes involve a series of operations to be performed on a huge amount of Web data, which has to be classified so that the irrelevant content can be distinguished. Such efforts have drawn the attention of scientists to finding better methods for outlining irrelevant or outlier Web content.

Identification and elimination of outliers, or in simpler words irrelevant data, are important components of the data mining process, as meaningful data cannot be extracted unless the irrelevant data or noise has been removed. Data is dubbed an outlier on the basis of its divergence from the regular document pattern, i.e. unrelated data is anything that has nothing to do with the required query. However, an outlier can also be the most important data fragment in the whole lot, as it may represent an anomaly in the required data set: for example, a disastrous climatic shift could be considered irrelevant and ignored during a careless outlier analysis phase, even though it would perhaps be the most crucial piece of information in the whole document. Due to such an uncertain nature of outliers, determination of the best outlier identification method has always been somewhat debatable. There are a number of methodologies (Kaur A., et al., 2007), (Agyemang M., Barker K., Alhajj R., 2005) in practice today, each with its own advantages depending on the requirements, some of which are mentioned below.

i. Distance based algorithms
These algorithms use a ranking system, awarding ranks to all the points on the basis of their distance from a particular kth neighbor, and then ordering the points on the basis of the awarded ranks. The top n points are considered to be outliers. The distance based algorithms can be sub-categorized into the index based approach, the cell based approach, and the nested loop approach.

ii. Depth based algorithms
In the depth based approach, the data is arranged in the form of convex hull layers, and the layers are "peeled" one after the other. The data points with shallow depth values are dubbed outliers. However, depth based algorithms are not as effective when used on larger data sets.

iii. Density based algorithms
Points are graded in terms of the local outlier factor (LOF), which relies heavily on the local neighborhood density. High factors represent outlying points.

iv. Distribution based algorithms
These algorithms take a statistical approach to determining outlying points, by fitting distribution models to the data and then identifying the outlying points, which are evident as they deviate from the model structure. However, fitting standard distribution models to the data sets is sometimes computationally costly and difficult, due to which unsatisfactory results may be generated.

v. Deviation based algorithms
This approach studies the general characteristics of the relevant data sets as a single entity or group and considers the data sets that deviate from these properties to be outlying anomalies.
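As an illustration of the distance based approach (item i above), the following Python sketch (the data, k, and n are illustrative values; scikit-learn is assumed) ranks points by the distance to their kth nearest neighbor and reports the top n as outliers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy 2-D data: a tight cluster plus two stray points (illustrative only).
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.3, size=(20, 2)),
                    [[3.0, 3.0], [-2.5, 2.0]]])

def knn_distance_outliers(X, k=3, n=2):
    """Rank points by the distance to their k-th nearest neighbor and
    return the indices of the n most isolated points."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own neighbor
    dists, _ = nn.kneighbors(X)
    kth_dist = dists[:, k]                            # distance to the k-th real neighbor
    return np.argsort(kth_dist)[::-1][:n]

print(knn_distance_outliers(points))
# indices 20 and 21 (in some order): the two stray points
```

scikit-learn's LocalOutlierFactor class implements the density based variant (item iii) and can be swapped in with a similar amount of code.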
7. CONCLUSION
Web mining is a vast field centered on information extraction from Web data. Web mining is divided into three categories: i. Web content mining (including video, audio, text, etc.), ii. Web usage mining (consumer purchase behavior, web surfing patterns, brand management through marketing surveys, etc.), and iii. Web structural mining (i.e. how web pages are connected with each other through hyperlinks, which represents a huge graph, as discussed earlier). This paper discussed the benefits of and mining techniques for the three categories, and specifically presented a generalized framework for Web content mining. The importance of 'outliers' within the three categories differs, and separate methods have to be employed for each category. Outliers are data sets that are not directly relevant to the user's search, that deviate from the user's intent, or that cannot be fitted into the defined criteria. Finding outliers during the Web mining process is a challenging task, whether finding authoritative Web blogs or generating the most relevant results for the query provided by the user. Identifying outliers during the Web mining process is one of the least addressed topics within Web-based knowledge extraction studies, and its importance is growing quite rapidly with the increase of revenue-based business built on Web data.

Mining outliers from Web usage data is more challenging than in the other two categories, and the outlier identification methods need to be more intuitive than in the other two domains. A lot of work has to be done on each problem domain within Web usage mining; that is, within Web usage mining, different methods have to be developed for finding outliers for consumer behaviors and for recommender systems. This paper presented a generalized framework for the Web content mining process and focused on textual data with the help of information retrieval and information extraction approaches. Of all the available types of Web content, such as video, audio, and text, the framework has so far only been tried on text-based data for knowledge extraction. The framework for Web content mining consists of a Web crawler, a classification engine, and an outlier extractor. The crawler is responsible for extracting the top Web pages with the help of the search engines. These repositories of Web pages, after some processing, are provided to the classification engine. The classification engine consists of two major components, i.e. supervised classification and unsupervised classification. Supervised classification tools like SVM, bagged trees, decision trees, and others can be used; we tried SVM and presented a brief description in the machine learning section. For unsupervised learning, clustering algorithms were found appropriate. Web content that cannot be classified either with the help of the provided categories or by the clustering methods is treated as the outliers. Though this is the most trivial method for mining relationships within a collection of documents, intuitively it is a good approach for mining outliers as well. The framework inverts the effect of all classification processes, either supervised or unsupervised, and at the end of the process results in a set of outlying documents. Further enhancements can be made later on, and specialized techniques can be built on top of the discussed platform.

8. REFERENCES
Agyemang M., Barker K., Alhajj R. (2005). Mining Web Content Outliers using Structure Oriented Weighting Techniques and N-Grams. SAC '05, March 13-17, Santa Fe, New Mexico, pp 482-487.
Cristianini N., Shawe-Taylor J. (2000). An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge.
Etzioni O. (1996). Moving up the information food chain: Deploying softbots on the world wide web. In Proceedings of the 13th National Conference on Artificial Intelligence (AAAI-96), pp 1322-1326, AAAI Press.
Hodge V., Austin J. (2004). A survey of outlier detection methodologies. Artificial Intelligence Review, vol 22, issue 2, pp 85-126.
Huberman B. A. (2001). The Laws of the Web. MIT Press, Cambridge, MA.
Jain A. K., Murty M. N., Flynn P. J. (1999). Data clustering: a review. ACM Computing Surveys, vol 31, no 3.
Kaur A., et al. (2007). The art of outlier detection in streaming data. IADIS European Conference on Data Mining, pp 209-212.
Kotsiantis S. (2007). Supervised machine learning: a review of classification techniques. Informatica 31, pp 249-268.
Lawrence S., Giles C. L. (1999). Accessibility of information on the web. Nature, 400, pp 107-109.
Salton G., McGill M. (1983). Introduction to Modern Information Retrieval. New York: McGraw-Hill.
Toyoda M., Kitsuregawa M. (1999). Web community chart: a tool for navigating the web and observing its evolution. IEICE Trans. Fundamentals, vol E82, no 1, Jan 1999.
Toyoda M., Kitsuregawa M. (2003). What's really new on the web? Identifying new pages from a series of unstable web snapshots. International World Wide Web Conference (WWW), ACM, May 23-26, pp 233-241.
Yanai K. (2003). Web image mining: can we gather visual knowledge for image recognition from the Web? Information, Communications and Signal Processing, and Fourth Pacific Rim Conference on Multimedia, vol 1, pp 186-190.
Yanai K., Barnard K. (2006). Finding visual concepts by web image mining. Proceedings of the 15th International Conference on World Wide Web (WWW '06).
Zheng C., Liu W., Feng Z., Mingjing L. (2001). Web mining for web image retrieval. Journal of the American Society for Information Science and Technology, vol 52, issue 10.