
Web数据挖掘:超文本数据的知识发现 (Web Data Mining: Knowledge Discovery from Hypertext Data, English-language edition)

Price: ¥59.00

Author: Soumen Chakrabarti (India)
Publisher: Posts & Telecom Press (人民邮电出版社)
Series: Turing Original Computer Science Series (图灵原版计算机科学系列)
Tags: Data Warehousing and Data Mining

ISBN: 9787115194046    Publication date: 2009-02-01    Binding: Paperback
Format: 16开    Pages: 344

About the Book

  A classic in the field of information retrieval, this book gives an in-depth treatment of techniques for extracting and producing knowledge from large volumes of unstructured Web data. It begins with the infrastructure of the Web, including mechanisms for crawling, indexing, and keyword- or similarity-based search. It then systematically presents the fundamentals of Web mining, focusing on machine learning and data mining methods for hypertext, such as clustering, collaborative filtering, supervised learning, and semi-supervised learning, and concludes with how these principles are applied in Web mining. The book equips readers with a solid technical background and up-to-date knowledge. It is an ideal reference for professionals working in data mining research and development, and is also suitable as a graduate-level textbook for computer science and related programs.
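To give a flavor of the similarity-based search the book covers (cf. 3.2.2, The Vector-Space Model, in the table of contents below), here is a minimal Python sketch of TF-IDF weighting and cosine ranking over a toy corpus. The corpus, query, and helper functions (tfidf_vectors, cosine) are illustrative assumptions for this listing, not code from the book.

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        # Term frequency * inverse document frequency, one sparse vector per document.
        n = len(docs)
        df = Counter()                      # document frequency of each term
        for doc in docs:
            df.update(set(doc))
        return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
                for doc in docs]

    def cosine(u, v):
        # Cosine similarity between two sparse term-weight vectors.
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        norm = (math.sqrt(sum(w * w for w in u.values()))
                * math.sqrt(sum(w * w for w in v.values())))
        return dot / norm if norm else 0.0

    # Toy corpus and query (purely illustrative).
    corpus = [
        "web crawling and indexing".split(),
        "clustering web documents by similarity".split(),
        "supervised learning for text classification".split(),
    ]
    query = "similarity search over web documents".split()

    vectors = tfidf_vectors(corpus + [query])   # index the query with the corpus so idf is defined
    query_vec, doc_vecs = vectors[-1], vectors[:-1]
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]), reverse=True)
    print("best match:", " ".join(corpus[ranked[0]]))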

About the Author

  Soumen Chakrabarti is a leading expert in Web search and mining and an associate editor of ACM Transactions on the Web. He received his Ph.D. from the University of California, Berkeley, and is currently an associate professor in the Department of Computer Science and Engineering at the Indian Institute of Technology. He previously worked at the IBM Almaden Research Center on hypertext databases and data mining. He has extensive hands-on project experience, has built several Web mining systems, and holds several U.S. patents.

Table of Contents

1 INTRODUCTION
1.1 Crawling and Indexing
1.2 Topic Directories
1.3 Clustering and Classification
1.4 Hyperlink Analysis
1.5 Resource Discovery and Vertical Portals
1.6 Structured vs. Unstructured Data Mining
1.7 Bibliographic Notes
PART Ⅰ INFRASTRUCTURE
2  CRAWLING THE WEB
2.1 HTML and HTTP Basics
2.2 Crawling Basics
2.3 Engineering Large-Scale Crawlers
2.3.1 DNS Caching, Prefetching, and Resolution
2.3.2 Multiple Concurrent Fetches
2.3.3 Link Extraction and Normalization
2.3.4 Robot Exclusion
2.3.5 Eliminating Already-Visited URLs
2.3.6 Spider Traps
2.3.7 Avoiding Repeated Expansion of Links on Duplicate Pages
2.3.8 Load Monitor and Manager
2.3.9 Per-Server Work-Queues
2.3.10 Text Repository
2.3.11 Refreshing Crawled Pages
2.4 Putting Together a Crawler
2.4.1 Design of the Core Components
2.4.2 Case Study: Using w3c-libwww
2.5 Bibliographic Notes
3 WEB SEARCH AND INFORMATION RETRIEVAL
3.1 Boolean Queries and the Inverted Index
3.1.1 Stopwords and Stemming
3.1.2 Batch Indexing and Updates
3.1.3 Index Compression Techniques
3.2 Relevance Ranking
3.2.1 Recall and Precision
3.2.2 The Vector-Space Model
3.2.3 Relevance Feedback and Rocchio's Method
3.2.4 Probabilistic Relevance Feedback Models
3.2.5 Advanced Issues
3.3 Similarity Search
3.3.1 Handling "Find-Similar" Queries
3.3.2 Eliminating Near Duplicates via Shingling
3.3.3 Detecting Locally Similar Subgraphs of the Web
3.4 Bibliographic Notes
PART Ⅱ LEARNING
4 SIMILARITY AND CLUSTERING
4.1 Formulations and Approaches
4.1.1 Partitioning Approaches
4.1.2 Geometric Embedding Approaches
4.1.3 Generative Models and Probabilistic Approaches
4.2 Bottom-Up and Top-Down Partitioning Paradigms
4.2.1 Agglomerative Clustering
4.2.2 The k-Means Algorithm
4.3 Clustering and Visualization via Embeddings
4.3.1 Self-Organizing Maps (SOMs)
4.3.2 Multidimensional Scaling (MDS) and FastMap
4.3.3 Projections and Subspaces
4.3.4 Latent Semantic Indexing (LSI)
4.4 Probabilistic Approaches to Clustering
4.4.1 Generative Distributions for Documents
4.4.2 Mixture Models and Expectation Maximization (EM)
4.4.3 Multiple Cause Mixture Model (MCMM)
4.4.4 Aspect Models and Probabilistic LSI
4.4.5 Model and Feature Selection
4.5 Collaborative Filtering
4.5.1 Probabilistic Models
4.5.2 Combining Content-Based and Collaborative Features
4.6 Bibliographic Notes
5 SUPERVISED LEARNING
5.1 The Supervised Learning Scenario
5.2 Overview of Classification Strategies
5.3 Evaluating Text Classifiers
5.3.1 Benchmarks
5.3.2 Measures of Accuracy
5.4 Nearest Neighbor Learners
5.4.1 Pros and Cons
5.4.2 Is TFIDF Appropriate?
5.5 Feature Selection
5.5.1 Greedy Inclusion Algorithms
5.5.2 Truncation Algorithms
5.5.3 Comparison and Discussion
5.6 Bayesian Learners
5.6.1 Naive Bayes Learners
5.6.2 Small-Degree Bayesian Networks
5.7 Exploiting Hierarchy among Topics
5.7.1 Feature Selection
5.7.2 Enhanced Parameter Estimation
5.7.3 Training and Search Strategies
5.8 Maximum Entropy Learners
5.9 Discriminative Classification
5.9.1 Linear Least-Square Regression
5.9.2 Support Vector Machines
5.10 Hypertext Classification
5.10.1 Representing Hypertext for Supervised Learning
5.10.2 Rule Induction
5.11 Bibliographic Notes
6 SEMISUPERVISED LEARNING
6.1 Expectation Maximization
6.1.1 Experimental Results
6.1.2 Reducing the Belief in Unlabeled Documents
6.1.3 Modeling Labels Using Many Mixture Components
……
PART Ⅲ APPLICATIONS
