At a Glance
688 Pages
Revised
25.4 x 17.78 x 3.66 cm
Hardcover
$127.18
or 4 interest-free payments of $31.80
Aims to ship in 7 to 10 business days
Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and are the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have striven to produce a text that is pleasant to read, informative, and rigorous: one that reveals both the beautiful nature of the discipline and its practical side.
A selected-solutions manual for instructors accompanies the new edition.
Preface | p. xvii |
Preface to the Second Edition | p. xxi |
Introduction | p. 1 |
Mathematical Formulation | p. 2 |
Example: A Transportation Problem | p. 4 |
Continuous versus Discrete Optimization | p. 5 |
Constrained and Unconstrained Optimization | p. 6 |
Global and Local Optimization | p. 6 |
Stochastic and Deterministic Optimization | p. 7 |
Convexity | p. 7 |
Optimization Algorithms | p. 8 |
Notes and References | p. 9 |
Fundamentals of Unconstrained Optimization | p. 10 |
What Is a Solution? | p. 12 |
Recognizing a Local Minimum | p. 14 |
Nonsmooth Problems | p. 17 |
Overview of Algorithms | p. 18 |
Two Strategies: Line Search and Trust Region | p. 19 |
Search Directions for Line Search Methods | p. 20 |
Models for Trust-Region Methods | p. 25 |
Scaling | p. 26 |
Exercises | p. 27 |
Line Search Methods | p. 30 |
Step Length | p. 31 |
The Wolfe Conditions | p. 33 |
The Goldstein Conditions | p. 36 |
Sufficient Decrease and Backtracking | p. 37 |
Convergence of Line Search Methods | p. 37 |
Rate of Convergence | p. 41 |
Convergence Rate of Steepest Descent | p. 42 |
Newton's Method | p. 44 |
Quasi-Newton Methods | p. 46 |
Newton's Method with Hessian Modification | p. 48 |
Eigenvalue Modification | p. 49 |
Adding a Multiple of the Identity | p. 51 |
Modified Cholesky Factorization | p. 52 |
Modified Symmetric Indefinite Factorization | p. 54 |
Step-Length Selection Algorithms | p. 56 |
Interpolation | p. 57 |
Initial Step Length | p. 59 |
A Line Search Algorithm for the Wolfe Conditions | p. 60 |
Notes and References | p. 62 |
Exercises | p. 63 |
Trust-Region Methods | p. 66 |
Outline of the Trust-Region Approach | p. 68 |
Algorithms Based on the Cauchy Point | p. 71 |
The Cauchy Point | p. 71 |
Improving on the Cauchy Point | p. 73 |
The Dogleg Method | p. 73 |
Two-Dimensional Subspace Minimization | p. 76 |
Global Convergence | p. 77 |
Reduction Obtained by the Cauchy Point | p. 77 |
Convergence to Stationary Points | p. 79 |
Iterative Solution of the Subproblem | p. 83 |
The Hard Case | p. 87 |
Proof of Theorem 4.1 | p. 89 |
Convergence of Algorithms Based on Nearly Exact Solutions | p. 91 |
Local Convergence of Trust-Region Newton Methods | p. 92 |
Other Enhancements | p. 95 |
Scaling | p. 95 |
Trust Regions in Other Norms | p. 97 |
Notes and References | p. 98 |
Exercises | p. 98 |
Conjugate Gradient Methods | p. 101 |
The Linear Conjugate Gradient Method | p. 102 |
Conjugate Direction Methods | p. 102 |
Basic Properties of the Conjugate Gradient Method | p. 107 |
A Practical Form of the Conjugate Gradient Method | p. 111 |
Rate of Convergence | p. 112 |
Preconditioning | p. 118 |
Practical Preconditioners | p. 120 |
Nonlinear Conjugate Gradient Methods | p. 121 |
The Fletcher-Reeves Method | p. 121 |
The Polak-Ribière Method and Variants | p. 122 |
Quadratic Termination and Restarts | p. 124 |
Behavior of the Fletcher-Reeves Method | p. 125 |
Global Convergence | p. 127 |
Numerical Performance | p. 131 |
Notes and References | p. 132 |
Exercises | p. 133 |
Quasi-Newton Methods | p. 135 |
The BFGS Method | p. 136 |
Properties of the BFGS Method | p. 141 |
Implementation | p. 142 |
The SR1 Method | p. 144 |
Properties of SR1 Updating | p. 147 |
The Broyden Class | p. 149 |
Convergence Analysis | p. 153 |
Global Convergence of the BFGS Method | p. 153 |
Superlinear Convergence of the BFGS Method | p. 156 |
Convergence Analysis of the SR1 Method | p. 160 |
Notes and References | p. 161 |
Exercises | p. 162 |
Large-Scale Unconstrained Optimization | p. 164 |
Inexact Newton Methods | p. 165 |
Local Convergence of Inexact Newton Methods | p. 166 |
Line Search Newton-CG Method | p. 168 |
Trust-Region Newton-CG Method | p. 170 |
Preconditioning the Trust-Region Newton-CG Method | p. 174 |
Trust-Region Newton-Lanczos Method | p. 175 |
Limited-Memory Quasi-Newton Methods | p. 176 |
Limited-Memory BFGS | p. 177 |
Relationship with Conjugate Gradient Methods | p. 180 |
General Limited-Memory Updating | p. 181 |
Compact Representation of BFGS Updating | p. 181 |
Unrolling the Update | p. 184 |
Sparse Quasi-Newton Updates | p. 185 |
Algorithms for Partially Separable Functions | p. 186 |
Perspectives and Software | p. 189 |
Notes and References | p. 190 |
Exercises | p. 191 |
Calculating Derivatives | p. 193 |
Finite-Difference Derivative Approximations | p. 194 |
Approximating the Gradient | p. 195 |
Approximating a Sparse Jacobian | p. 197 |
Approximating the Hessian | p. 201 |
Approximating a Sparse Hessian | p. 202 |
Automatic Differentiation | p. 204 |
An Example | p. 205 |
The Forward Mode | p. 206 |
The Reverse Mode | p. 207 |
Vector Functions and Partial Separability | p. 210 |
Calculating Jacobians of Vector Functions | p. 212 |
Calculating Hessians: Forward Mode | p. 213 |
Calculating Hessians: Reverse Mode | p. 215 |
Current Limitations | p. 216 |
Notes and References | p. 217 |
Exercises | p. 217 |
Derivative-Free Optimization | p. 220 |
Finite Differences and Noise | p. 221 |
Model-Based Methods | p. 223 |
Interpolation and Polynomial Bases | p. 226 |
Updating the Interpolation Set | p. 227 |
A Method Based on Minimum-Change Updating | p. 228 |
Coordinate and Pattern-Search Methods | p. 229 |
Coordinate Search Method | p. 230 |
Pattern-Search Methods | p. 231 |
A Conjugate-Direction Method | p. 234 |
Nelder-Mead Method | p. 238 |
Implicit Filtering | p. 240 |
Notes and References | p. 242 |
Exercises | p. 242 |
Least-Squares Problems | p. 245 |
Background | p. 247 |
Linear Least-Squares Problems | p. 250 |
Algorithms for Nonlinear Least-Squares Problems | p. 254 |
The Gauss-Newton Method | p. 254 |
Convergence of the Gauss-Newton Method | p. 255 |
The Levenberg-Marquardt Method | p. 258 |
Implementation of the Levenberg-Marquardt Method | p. 259 |
Convergence of the Levenberg-Marquardt Method | p. 261 |
Methods for Large-Residual Problems | p. 262 |
Orthogonal Distance Regression | p. 265 |
Notes and References | p. 267 |
Exercises | p. 269 |
Nonlinear Equations | p. 270 |
Local Algorithms | p. 274 |
Newton's Method for Nonlinear Equations | p. 274 |
Inexact Newton Methods | p. 277 |
Broyden's Method | p. 279 |
Tensor Methods | p. 283 |
Practical Methods | p. 285 |
Merit Functions | p. 285 |
Line Search Methods | p. 287 |
Trust-Region Methods | p. 290 |
Continuation/Homotopy Methods | p. 296 |
Motivation | p. 296 |
Practical Continuation Methods | p. 297 |
Notes and References | p. 302 |
Exercises | p. 302 |
Theory of Constrained Optimization | p. 304 |
Local and Global Solutions | p. 305 |
Smoothness | p. 306 |
Examples | p. 307 |
A Single Equality Constraint | p. 308 |
A Single Inequality Constraint | p. 310 |
Two Inequality Constraints | p. 313 |
Tangent Cone and Constraint Qualifications | p. 315 |
First-Order Optimality Conditions | p. 320 |
First-Order Optimality Conditions: Proof | p. 323 |
Relating the Tangent Cone and the First-Order Feasible Direction Set | p. 323 |
A Fundamental Necessary Condition | p. 325 |
Farkas' Lemma | p. 326 |
Proof of Theorem 12.1 | p. 329 |
Second-Order Conditions | p. 330 |
Second-Order Conditions and Projected Hessians | p. 337 |
Other Constraint Qualifications | p. 338 |
A Geometric Viewpoint | p. 340 |
Lagrange Multipliers and Sensitivity | p. 341 |
Duality | p. 343 |
Notes and References | p. 349 |
Exercises | p. 351 |
Linear Programming: The Simplex Method | p. 355 |
Linear Programming | p. 356 |
Optimality and Duality | p. 358 |
Optimality Conditions | p. 358 |
The Dual Problem | p. 359 |
Geometry of the Feasible Set | p. 362 |
Bases and Basic Feasible Points | p. 362 |
Vertices of the Feasible Polytope | p. 365 |
The Simplex Method | p. 366 |
Outline | p. 366 |
A Single Step of the Method | p. 370 |
Linear Algebra in the Simplex Method | p. 372 |
Other Important Details | p. 375 |
Pricing and Selection of the Entering Index | p. 375 |
Starting the Simplex Method | p. 378 |
Degenerate Steps and Cycling | p. 381 |
The Dual Simplex Method | p. 382 |
Presolving | p. 385 |
Where Does the Simplex Method Fit? | p. 388 |
Notes and References | p. 389 |
Exercises | p. 389 |
Linear Programming: Interior-Point Methods | p. 392 |
Primal-Dual Methods | p. 393 |
Outline | p. 393 |
The Central Path | p. 397 |
Central Path Neighborhoods and Path-Following Methods | p. 399 |
Practical Primal-Dual Algorithms | p. 407 |
Corrector and Centering Steps | p. 407 |
Step Lengths | p. 409 |
Starting Point | p. 410 |
A Practical Algorithm | p. 411 |
Solving the Linear Systems | p. 411 |
Other Primal-Dual Algorithms and Extensions | p. 413 |
Other Path-Following Methods | p. 413 |
Potential-Reduction Methods | p. 414 |
Extensions | p. 415 |
Perspectives and Software | p. 416 |
Notes and References | p. 417 |
Exercises | p. 418 |
Fundamentals of Algorithms for Nonlinear Constrained Optimization | p. 421 |
Categorizing Optimization Algorithms | p. 422 |
The Combinatorial Difficulty of Inequality-Constrained Problems | p. 424 |
Elimination of Variables | p. 426 |
Simple Elimination using Linear Constraints | p. 428 |
General Reduction Strategies for Linear Constraints | p. 431 |
Effect of Inequality Constraints | p. 434 |
Merit Functions and Filters | p. 435 |
Merit Functions | p. 435 |
Filters | p. 437 |
The Maratos Effect | p. 440 |
Second-Order Correction and Nonmonotone Techniques | p. 443 |
Nonmonotone (Watchdog) Strategy | p. 444 |
Notes and References | p. 446 |
Exercises | p. 446 |
Quadratic Programming | p. 448 |
Equality-Constrained Quadratic Programs | p. 451 |
Properties of Equality-Constrained QPs | p. 451 |
Direct Solution of the KKT System | p. 454 |
Factoring the Full KKT System | p. 454 |
Schur-Complement Method | p. 455 |
Null-Space Method | p. 457 |
Iterative Solution of the KKT System | p. 459 |
CG Applied to the Reduced System | p. 459 |
The Projected CG Method | p. 461 |
Inequality-Constrained Problems | p. 463 |
Optimality Conditions for Inequality-Constrained Problems | p. 464 |
Degeneracy | p. 465 |
Active-Set Methods for Convex QPs | p. 467 |
Specification of the Active-Set Method for Convex QP | p. 472 |
Further Remarks on the Active-Set Method | p. 476 |
Finite Termination of Active-Set Algorithm on Strictly Convex QPs | p. 477 |
Updating Factorizations | p. 478 |
Interior-Point Methods | p. 480 |
Solving the Primal-Dual System | p. 482 |
Step Length Selection | p. 483 |
A Practical Primal-Dual Method | p. 484 |
The Gradient Projection Method | p. 485 |
Cauchy Point Computation | p. 486 |
Subspace Minimization | p. 488 |
Perspectives and Software | p. 490 |
Notes and References | p. 492 |
Exercises | p. 492 |
Penalty and Augmented Lagrangian Methods | p. 497 |
The Quadratic Penalty Method | p. 498 |
Motivation | p. 498 |
Algorithmic Framework | p. 501 |
Convergence of the Quadratic Penalty Method | p. 502 |
Ill Conditioning and Reformulations | p. 505 |
Nonsmooth Penalty Functions | p. 507 |
A Practical ℓ₁ Penalty Method | p. 511 |
A General Class of Nonsmooth Penalty Methods | p. 513 |
Augmented Lagrangian Method: Equality Constraints | p. 514 |
Motivation and Algorithmic Framework | p. 514 |
Properties of the Augmented Lagrangian | p. 517 |
Practical Augmented Lagrangian Methods | p. 519 |
Bound-Constrained Formulation | p. 519 |
Linearly Constrained Formulation | p. 522 |
Unconstrained Formulation | p. 523 |
Perspectives and Software | p. 525 |
Notes and References | p. 526 |
Exercises | p. 527 |
Sequential Quadratic Programming | p. 529 |
Local SQP Method | p. 530 |
SQP Framework | p. 531 |
Inequality Constraints | p. 532 |
Preview of Practical SQP Methods | p. 533 |
IQP and EQP | p. 533 |
Enforcing Convergence | p. 534 |
Algorithmic Development | p. 535 |
Handling Inconsistent Linearizations | p. 535 |
Full Quasi-Newton Approximations | p. 536 |
Reduced-Hessian Quasi-Newton Approximations | p. 538 |
Merit Functions | p. 540 |
Second-Order Correction | p. 543 |
A Practical Line Search SQP Method | p. 545 |
Trust-Region SQP Methods | p. 546 |
A Relaxation Method for Equality-Constrained Optimization | p. 547 |
Sℓ₁QP (Sequential ℓ₁ Quadratic Programming) | p. 549 |
Sequential Linear-Quadratic Programming (SLQP) | p. 551 |
A Technique for Updating the Penalty Parameter | p. 553 |
Nonlinear Gradient Projection | p. 554 |
Convergence Analysis | p. 556 |
Rate of Convergence | p. 557 |
Perspectives and Software | p. 560 |
Notes and References | p. 561 |
Exercises | p. 561 |
Interior-Point Methods for Nonlinear Programming | p. 563 |
Two Interpretations | p. 564 |
A Basic Interior-Point Algorithm | p. 566 |
Algorithmic Development | p. 569 |
Primal vs. Primal-Dual System | p. 570 |
Solving the Primal-Dual System | p. 570 |
Updating the Barrier Parameter | p. 572 |
Handling Nonconvexity and Singularity | p. 573 |
Step Acceptance: Merit Functions and Filters | p. 575 |
Quasi-Newton Approximations | p. 575 |
Feasible Interior-Point Methods | p. 576 |
A Line Search Interior-Point Method | p. 577 |
A Trust-Region Interior-Point Method | p. 578 |
An Algorithm for Solving the Barrier Problem | p. 578 |
Step Computation | p. 580 |
Lagrange Multiplier Estimates and Step Acceptance | p. 581 |
Description of a Trust-Region Interior-Point Method | p. 582 |
The Primal Log-Barrier Method | p. 583 |
Global Convergence Properties | p. 587 |
Failure of the Line Search Approach | p. 587 |
Modified Line Search Methods | p. 589 |
Global Convergence of the Trust-Region Approach | p. 589 |
Superlinear Convergence | p. 591 |
Perspectives and Software | p. 592 |
Notes and References | p. 593 |
Exercises | p. 594 |
Background Material | p. 598 |
Elements of Linear Algebra | p. 598 |
Vectors and Matrices | p. 598 |
Norms | p. 600 |
Subspaces | p. 602 |
Eigenvalues, Eigenvectors, and the Singular-Value Decomposition | p. 603 |
Determinant and Trace | p. 605 |
Matrix Factorizations: Cholesky, LU, QR | p. 606 |
Symmetric Indefinite Factorization | p. 610 |
Sherman-Morrison-Woodbury Formula | p. 612 |
Interlacing Eigenvalue Theorem | p. 613 |
Error Analysis and Floating-Point Arithmetic | p. 613 |
Conditioning and Stability | p. 616 |
Elements of Analysis, Geometry, Topology | p. 617 |
Sequences | p. 617 |
Rates of Convergence | p. 619 |
Topology of the Euclidean Space Rⁿ | p. 620 |
Convex Sets in Rⁿ | p. 621 |
Continuity and Limits | p. 623 |
Derivatives | p. 625 |
Directional Derivatives | p. 628 |
Mean Value Theorem | p. 629 |
Implicit Function Theorem | p. 630 |
Order Notation | p. 631 |
Root-Finding for Scalar Equations | p. 633 |
A Regularization Procedure | p. 635 |
References | p. 637 |
Index | p. 653 |
Table of Contents provided by Ingram. All Rights Reserved.
ISBN: 9780387303031
ISBN-10: 0387303030
Series: Springer Series in Operations Research and Financial Engineering
Published: 1st July 2006
Format: Hardcover
Language: English
Number of Pages: 688
Audience: College, Tertiary and University
Publisher: Springer Nature B.V.
Country of Publication: US
Edition Number: 2
Edition Type: Revised
Dimensions (cm): 25.4 x 17.78 x 3.66
Weight (kg): 1.39
Shipping
| Delivery area | Standard Shipping | Express Shipping |
| --- | --- | --- |
| Metro postcodes | $9.99 | $14.95 |
| Regional postcodes | $9.99 | $14.95 |
| Rural postcodes | $9.99 | $14.95 |
How to return your order
At Booktopia, we offer hassle-free returns in accordance with our returns policy. If you wish to return an item, please get in touch with Booktopia Customer Care.
Additional postage charges may apply.
Defective items
If there is a problem with any of the items received in your order, the Booktopia Customer Care team is ready to assist you.
For more info please visit our Help Centre.