
The Dissimilarity Representation for Pattern Recognition: Foundations and Applications
By: Elzbieta Pekalska, Robert P W Duin
Hardcover | 7 December 2005
At a Glance
636 Pages
15.5 x 23.6 x 3.6 cm
Hardcover
RRP $486.99
$438.75 (10% off)
or 4 interest-free payments of $109.69
Ships in 15 to 25 business days
This book provides a fundamentally new approach to pattern recognition in which objects are characterized by their relations to other objects rather than by features or models. This 'dissimilarity representation' bridges the gap between the traditionally opposing approaches of statistical and structural pattern recognition.

Physical phenomena, objects and events in the world are related in various and often complex ways. Such relations are usually modeled in the form of graphs or diagrams. While this is useful for communication between experts, such representations are difficult for machine learning procedures to combine and integrate. However, if the relations are captured as sets of dissimilarities, general data analysis procedures can be applied.

With their detailed description of an approach absent from traditional textbooks, the authors have crafted an essential book for every researcher and systems designer studying or developing pattern recognition systems.
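To make the idea concrete, here is a minimal sketch of the dissimilarity-space approach the book develops: each object is described by its dissimilarities to a small representation set of prototypes, after which any standard classifier can be trained on that description. The sketch assumes Python with scikit-learn, uses ordinary Euclidean distance as a stand-in for an application-specific dissimilarity measure, and picks an arbitrary prototype count of 30; it illustrates the general idea rather than the authors' own implementation.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances, accuracy_score

# A small digit data set stands in for any set of objects
# for which a dissimilarity measure is available.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Representation set R: here simply the first 30 training objects.
R = X_train[:30]

# Dissimilarity representation: each object becomes the vector of its
# distances to R. Euclidean distance is a placeholder for a measure
# tailored to the application (edit, shape, spectral, ...).
D_train = pairwise_distances(X_train, R, metric="euclidean")
D_test = pairwise_distances(X_test, R, metric="euclidean")

# Any standard classifier can now operate in this dissimilarity space.
clf = LogisticRegression(max_iter=1000).fit(D_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(D_test)))
```

The key design point, and the one the book explores at length, is that the dissimilarity measure need not be Euclidean or even metric; once the distances to the representation set are computed, standard statistical learning tools apply.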
Table of Contents
| Section | Page |
|---|---|
| Preface | p. vii |
| Notation and basic terminology | p. xi |
| Abbreviations | p. xix |
| Introduction | p. 1 |
| Recognizing the pattern | p. 1 |
| Dissimilarities for representation | p. 2 |
| Learning from examples | p. 4 |
| Motivation of the use of dissimilarity representations | p. 8 |
| Relation to kernels | p. 13 |
| Outline of the book | p. 14 |
| In summary | p. 16 |
| Spaces | p. 23 |
| Preliminaries | p. 25 |
| A brief look at spaces | p. 28 |
| Generalized topological spaces | p. 32 |
| Generalized metric spaces | p. 46 |
| Vector spaces | p. 56 |
| Normed and inner product spaces | p. 62 |
| Reproducing kernel Hilbert spaces | p. 69 |
| Indefinite inner product spaces | p. 71 |
| Reproducing kernel Krein spaces | p. 85 |
| Discussion | p. 87 |
| Characterization of dissimilarities | p. 89 |
| Embeddings, tree models and transformations | p. 90 |
| Embeddings | p. 90 |
| Distorted metric embeddings | p. 95 |
| Tree models for dissimilarities | p. 95 |
| Useful transformations | p. 99 |
| Transformations in semimetric spaces | p. 99 |
| Direct product spaces | p. 102 |
| Invariance and robustness | p. 103 |
| Properties of dissimilarity matrices | p. 105 |
| Dissimilarity matrices | p. 105 |
| Square distances and inner products | p. 116 |
| Linear embeddings of dissimilarities | p. 118 |
| Euclidean embedding | p. 118 |
| Correction of non-Euclidean dissimilarities | p. 120 |
| Pseudo-Euclidean embedding | p. 122 |
| Generalized average variance | p. 124 |
| Projecting new vectors to an embedded space | p. 125 |
| Reduction of dimension | p. 127 |
| Reduction of complexity | p. 128 |
| A general embedding | p. 129 |
| Spherical embeddings | p. 130 |
| Spatial representation of dissimilarities | p. 132 |
| FastMap | p. 133 |
| Multidimensional scaling | p. 135 |
| Reduction of complexity | p. 143 |
| Summary | p. 144 |
| Learning approaches | p. 147 |
| Traditional learning | p. 148 |
| Data bias and model bias | p. 148 |
| Statistical learning | p. 151 |
| Inductive principles | p. 154 |
| Empirical risk minimization (ERM) | p. 156 |
| Principles based on Occam's razor | p. 160 |
| Why is the statistical approach not good enough for learning from objects? | p. 163 |
| The role of dissimilarity representations | p. 166 |
| Learned proximity representations | p. 171 |
| Dissimilarity representations: learning | p. 172 |
| Classification in generalized topological spaces | p. 175 |
| Classification in dissimilarity spaces | p. 180 |
| Characterization of dissimilarity spaces | p. 180 |
| Classifiers | p. 185 |
| Classification in pseudo-Euclidean spaces | p. 196 |
| On generalized kernels and dissimilarity spaces | p. 205 |
| Connection between dissimilarity spaces and pseudo-Euclidean spaces | p. 209 |
| Discussion | p. 211 |
| Dissimilarity measures | p. 215 |
| Measures depending on feature types | p. 216 |
| Measures between populations | p. 228 |
| Normal distributions | p. 228 |
| Divergence measures | p. 229 |
| Discrete probability distributions | p. 233 |
| Dissimilarity measures between sequences | p. 234 |
| Information-theoretic measures | p. 237 |
| Dissimilarity measures between sets | p. 238 |
| Dissimilarity measures in applications | p. 242 |
| Invariance and robustness | p. 242 |
| Example measures | p. 242 |
| Discussion and conclusions | p. 250 |
| Visualization | p. 255 |
| Multidimensional scaling | p. 257 |
| First examples | p. 259 |
| Linear and nonlinear methods: examples | p. 261 |
| Implementation | p. 267 |
| Other mappings | p. 268 |
| Examples: getting insight into the data | p. 274 |
| Tree models | p. 281 |
| Summary | p. 287 |
| Further data exploration | p. 289 |
| Clustering | p. 290 |
| Standard approaches | p. 290 |
| Clustering on dissimilarity representations | p. 295 |
| Clustering examples for dissimilarity representations | p. 303 |
| Intrinsic dimension | p. 309 |
| Sampling density | p. 319 |
| Proposed criteria | p. 320 |
| Experiments with the NIST digits | p. 325 |
| Summary | p. 331 |
| One-class classifiers | p. 333 |
| General issues | p. 336 |
| Construction of one-class classifiers | p. 337 |
| One-class classifiers in feature spaces | p. 341 |
| Domain descriptors for dissimilarity representations | p. 346 |
| Neighborhood-based OCCs | p. 348 |
| Generalized mean class descriptor | p. 350 |
| Linear programming dissimilarity data description | p. 353 |
| More issues on class descriptors | p. 359 |
| Experiments | p. 366 |
| Experiment I: Condition monitoring | p. 366 |
| Experiment II: Diseased mucosa in the oral cavity | p. 374 |
| Experiment III: Heart disease data | p. 377 |
| Conclusions | p. 379 |
| Classification | p. 383 |
| Proof of principle | p. 384 |
| NN rule vs alternative dissimilarity-based classifiers | p. 384 |
| Experiment I: square dissimilarity representations | p. 388 |
| Experiment II: the dissimilarity space approach | p. 389 |
| Discussion | p. 395 |
| Selection of the representation set: the dissimilarity space approach | p. 396 |
| Prototype selection methods | p. 398 |
| Experimental setup | p. 401 |
| Results and discussion | p. 404 |
| Conclusions | p. 416 |
| Selection of the representation set: the embedding approach | p. 417 |
| Prototype selection methods | p. 418 |
| Experiments and results | p. 421 |
| Conclusions | p. 428 |
| On corrections of dissimilarity measures | p. 428 |
| Going more Euclidean | p. 429 |
| Experimental setup | p. 430 |
| Results and conclusions | p. 432 |
| A few remarks on a simulated missing value problem | p. 439 |
| Existence of zero-error dissimilarity-based classifiers | p. 443 |
| Asymptotic separability of classes | p. 444 |
| Final discussion | p. 451 |
| Combining | p. 453 |
| Combining for one-class classification | p. 455 |
| Combining strategies | p. 456 |
| Data and experimental setup | p. 459 |
| Results and discussion | p. 462 |
| Summary and conclusions | p. 465 |
| Combining for standard two-class classification | p. 466 |
| Combining strategies | p. 466 |
| Experiments on the handwritten digit set | p. 468 |
| Results | p. 470 |
| Conclusions | p. 473 |
| Classifier projection space | p. 474 |
| Construction and the use of CPS | p. 475 |
| Summary | p. 483 |
| Representation review and recommendations | p. 485 |
| Representation review | p. 485 |
| Three generalization ways | p. 486 |
| Representation formation | p. 489 |
| Generalization capabilities | p. 492 |
| Practical considerations | p. 493 |
| Clustering | p. 495 |
| One-class classification | p. 496 |
| Classification | p. 497 |
| Conclusions and open problems | p. 503 |
| Summary and contributions | p. 505 |
| Extensions of dissimilarity representations | p. 508 |
| Open questions | p. 510 |
| On convex and concave functions | p. 515 |
| Linear algebra in vector spaces | p. 519 |
| Some facts on matrices in a Euclidean space | p. 519 |
| Some facts on matrices in a pseudo-Euclidean space | p. 523 |
| Measure and probability | p. 527 |
| Statistical sidelines | p. 533 |
| Likelihood and parameter estimation | p. 533 |
| Expectation-maximization (EM) algorithm | p. 535 |
| Model selection | p. 536 |
| PCA and probabilistic models | p. 538 |
| Gaussian model | p. 538 |
| A Gaussian mixture model | p. 539 |
| PCA | p. 541 |
| Probabilistic PCA | p. 542 |
| A mixture of probabilistic PCA | p. 543 |
| Data sets | p. 545 |
| Artificial data sets | p. 545 |
| Real-world data sets | p. 549 |
| Bibliography | p. 561 |
| Index | p. 599 |
Table of Contents provided by Ingram. All Rights Reserved.
ISBN: 9789812565303
ISBN-10: 9812565302
Series: Machine Perception and Artificial Intelligence
Published: 7th December 2005
Format: Hardcover
Number of Pages: 636
Audience: Professional and Scholarly
Publisher: World Scientific Publishing Co Pte Ltd
Country of Publication: GB
Dimensions (cm): 15.5 x 23.6 x 3.6
Weight (kg): 1.04
Shipping
| | Standard Shipping | Express Shipping |
|---|---|---|
| Metro postcodes | $9.99 | $14.95 |
| Regional postcodes | $9.99 | $14.95 |
| Rural postcodes | $9.99 | $14.95 |
Orders over $79.00 qualify for free shipping.
How to return your order
At Booktopia, we offer hassle-free returns in accordance with our returns policy. If you wish to return an item, please get in touch with Booktopia Customer Care.
Additional postage charges may be applicable.
Defective items
If there is a problem with any of the items you received, the Booktopia Customer Care team is ready to assist you.
For more info please visit our Help Centre.