
Instance Selection and Construction for Data Mining
By: Huan Liu (Editor), Hiroshi Motoda (Editor)
Hardcover | 28 February 2001
At a Glance
452 Pages
23.5 x 16.51 x 3.18 cm
Hardcover
$249.00
or 4 interest-free payments of $62.25
Ships in 5 to 7 business days
One of the major means of instance selection is sampling, whereby a sample is selected for testing and analysis; randomness is a key element of the process. Instance selection also covers methods that require search. Examples can be found in density estimation (finding representative instances, i.e., data points, for a cluster); boundary hunting (finding the critical instances that form boundaries differentiating data points of different classes); and data squashing (producing weighted new data with equivalent sufficient statistics). Other important issues related to instance selection include unwanted precision, focusing, concept drift, noise/outlier removal, and data smoothing.
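To make two of these ideas concrete, here is a minimal sketch in Python of (1) simple random sampling and (2) selecting representative instances as the points nearest cluster centroids. The synthetic data, sample size, number of clusters, and use of scikit-learn's KMeans are illustrative assumptions, not the book's own algorithms.

```python
# Minimal sketch of two instance-selection ideas (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(10_000, 5))  # stand-in for a large dataset

# 1) Sampling: randomness is the key element -- draw a uniform
#    random subset of instances without replacement.
sample_idx = rng.choice(len(X), size=500, replace=False)
X_sample = X[sample_idx]

# 2) Density estimation / representative instances: cluster the data,
#    then keep the instance nearest each centroid as that cluster's
#    representative (a crude prototype-selection scheme).
k = 20
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
representatives = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    representatives.append(members[np.argmin(dists)])
X_repr = X[representatives]

print(X_sample.shape, X_repr.shape)  # (500, 5) (20, 5)
```

Either reduced set can then stand in for the full dataset in downstream mining, trading some fidelity for a much smaller working set.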
Instance Selection and Construction for Data Mining brings researchers and practitioners together to report new developments and applications, to share hard-learned experiences in order to avoid similar pitfalls, and to shed light on the future development of instance selection. This volume serves as a comprehensive reference for graduate students, practitioners and researchers in KDD.
| Foreword | p. xi |
| Preface | p. xiii |
| Acknowledgments | p. xv |
| Contributing Authors | p. xvii |
| Background and Foundation | |
| Data Reduction via Instance Selection | p. 3 |
| Background | p. 3 |
| Major Lines of Work | p. 7 |
| Evaluation Issues | p. 11 |
| Related Work | p. 13 |
| Distinctive Contributions | p. 14 |
| Conclusion and Future Work | p. 18 |
| Sampling: Knowing Whole from Its Part | p. 21 |
| Introduction | p. 21 |
| Basics of Sampling | p. 22 |
| General Considerations | p. 23 |
| Categories of Sampling Methods | p. 26 |
| Choosing Sampling Methods | p. 36 |
| Conclusion | p. 37 |
| A Unifying View on Instance Selection | p. 39 |
| Introduction | p. 39 |
| Focusing Tasks | p. 40 |
| Evaluation Criteria for Instance Selection | p. 43 |
| A Unifying Framework for Instance Selection | p. 45 |
| Evaluation | p. 49 |
| Conclusions | p. 52 |
| Instance Selection Methods | |
| Competence Guided Instance Selection for Case-Based Reasoning | p. 59 |
| Introduction | p. 59 |
| Related Work | p. 60 |
| A Competence Model for CBR | p. 64 |
| Competence Footprinting | p. 66 |
| Experimental Analysis | p. 69 |
| Current Status | p. 74 |
| Conclusions | p. 74 |
| Identifying Competence-Critical Instances for Instance-Based Learners | p. 77 |
| Introduction | p. 77 |
| Defining the Problem | p. 78 |
| Review | p. 82 |
| Comparative Evaluation | p. 89 |
| Conclusions | p. 91 |
| Genetic-Algorithm-Based Instance and Feature Selection | p. 95 |
| Introduction | p. 95 |
| Genetic Algorithms | p. 97 |
| Performance Evaluation | p. 104 |
| Effect on Neural Networks | p. 108 |
| Some Variants | p. 109 |
| Concluding Remarks | p. 111 |
| The Landmark Model: An Instance Selection Method for Time Series Data | p. 113 |
| Introduction | p. 114 |
| The Landmark Data Model and Similarity Model | p. 118 |
| Data Representation | p. 125 |
| Conclusion | p. 128 |
| Use of Sampling Methods | |
| Adaptive Sampling Methods for Scaling Up Knowledge Discovery Algorithms | p. 133 |
| Introduction | p. 134 |
| General Rule Selection Problem | p. 136 |
| Adaptive Sampling Algorithm | p. 138 |
| An Application of AdaSelect | p. 142 |
| Concluding Remarks | p. 149 |
| Progressive Sampling | p. 151 |
| Introduction | p. 152 |
| Progressive Sampling | p. 153 |
| Determining an Efficient Schedule | p. 155 |
| Detecting Convergence | p. 161 |
| Adaptive Scheduling | p. 162 |
| Empirical Comparison of Sampling Schedules | p. 163 |
| Discussion | p. 167 |
| Conclusion | p. 168 |
| Sampling Strategy for Building Decision Trees from Very Large Databases Comprising Many Continuous Attributes | p. 171 |
| Introduction | p. 171 |
| Induction of Decision Trees | p. 172 |
| Local Sampling Strategies for Decision Trees | p. 175 |
| Experiments | p. 182 |
| Conclusion and Future Work | p. 186 |
| Incremental Classification Using Tree-Based Sampling for Large Data | p. 189 |
| Introduction | p. 190 |
| Related Work | p. 192 |
| Incremental Classification | p. 193 |
| Sampling for Incremental Classification | p. 198 |
| Empirical Results | p. 201 |
| Unconventional Methods | |
| Instance Construction via Likelihood-Based Data Squashing | p. 209 |
| Introduction | p. 210 |
| The LDS Algorithm | p. 213 |
| Evaluation: Logistic Regression | p. 215 |
| Evaluation: Neural Networks | p. 221 |
| Iterative LDS | p. 222 |
| Discussion | p. 224 |
| Learning via Prototype Generation and Filtering | p. 227 |
| Introduction | p. 228 |
| Related Work | p. 228 |
| Our Proposed Algorithm | p. 235 |
| Empirical Evaluation | p. 239 |
| Conclusions and Future Work | p. 241 |
| Instance Selection Based on Hypertuples | p. 245 |
| Introduction | p. 246 |
| Definitions and Notation | p. 247 |
| Merging Hypertuples while Preserving Classification Structure | p. 249 |
| Merging Hypertuples to Maximize Density | p. 253 |
| Selection of Representative Instances | p. 257 |
| NN-Based Classification Using Representative Instances | p. 258 |
| Experiment | p. 259 |
| Summary and Conclusion | p. 260 |
| KBIS: Using Domain Knowledge to Guide Instance Selection | p. 263 |
| Introduction | p. 264 |
| Motivation | p. 266 |
| Methodology | p. 267 |
| Experimental Setup | p. 274 |
| Analysis and Evaluation | p. 275 |
| Conclusions | p. 277 |
| Instance Selection in Model Combination | |
| Instance Sampling for Boosted and Standalone Nearest Neighbor Classifiers | p. 283 |
| Introduction | p. 284 |
| Related Research | p. 286 |
| Sampling for a Standalone Nearest Neighbor Classifier | p. 288 |
| Coarse Reclassification | p. 290 |
| A Taxonomy of Instance Types | p. 294 |
| Conclusions | p. 297 |
| Prototype Selection Using Boosted Nearest-Neighbors | p. 301 |
| Introduction | p. 302 |
| From Instances to Prototypes and Weak Hypotheses | p. 305 |
| Experimental Results | p. 310 |
| Conclusion | p. 316 |
| DAGGER: Instance Selection for Combining Multiple Models Learnt from Disjoint Subsets | p. 319 |
| Introduction | p. 320 |
| Related Work | p. 321 |
| The DAGGER Algorithm | p. 323 |
| A Proof | p. 327 |
| The Experimental Method | p. 329 |
| Results | p. 330 |
| Discussion and Future Work | p. 334 |
| Applications of Instance Selection | |
| Using Genetic Algorithms for Training Data Selection in RBF Networks | p. 339 |
| Introduction | p. 340 |
| Training Set Selection: A Brief Review | p. 340 |
| Genetic Algorithms | p. 342 |
| Experiments | p. 344 |
| A Real-World Regression Problem | p. 348 |
| Conclusions | p. 354 |
| An Active Learning Formulation for Instance Selection with Applications to Object Detection | p. 357 |
| Introduction | p. 358 |
| The Theoretical Formulation | p. 359 |
| Comparing Sample Complexity | p. 363 |
| Instance Selection in an Object Detection Scenario | p. 370 |
| Conclusion | p. 373 |
| Filtering Noisy Instances and Outliers | p. 375 |
| Introduction | p. 376 |
| Background and Related Work | p. 377 |
| Noise Filtering Algorithms | p. 379 |
| Experimental Evaluation | p. 386 |
| Summary and Further Work | p. 391 |
| Instance Selection Based on Support Vector Machine | p. 395 |
| Introduction | p. 396 |
| Support Vector Machines | p. 397 |
| Instance Discovery Based on Support Vector Machines | p. 398 |
| Application to the Meningoencephalitis Data Set | p. 401 |
| Discussion | p. 406 |
| Conclusions | p. 407 |
| Meningoencephalitis Data Set | p. 410 |
| Index | p. 413 |
| Table of Contents provided by Syndetics. All Rights Reserved. |
ISBN: 9780792372097
ISBN-10: 0792372093
Series: The Kluwer International Series in Engineering and Computer Science
Published: 28th February 2001
Format: Hardcover
Language: English
Number of Pages: 452
Audience: General Adult
Publisher: Springer Nature B.V.
Country of Publication: US
Dimensions (cm): 23.5 x 16.51 x 3.18
Weight (kg): 0.86
Linguistic Data Science and the English Passive: Modeling Diachronic Developments and Regional Variation
Hardcover
RRP $190.00 | $167.99