Optimization is a mathematical technique for finding the values of variables that minimize or maximize an objective function. The variables are usually subject to constraints.
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element (with regard to some criterion) from some set of available alternatives. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. (Wikipedia)
Optimization problems are usually composed of three elements.
Objective function
Minimizing or maximizing the objective function is the main goal of the optimization process. Most optimization problems have a single objective function. Some problems, however, need no objective function at all: for example, a model that only needs to find variable values that satisfy the constraints. In that case nothing is optimized, so there is no objective function to define. These are usually called feasibility problems.
On the other hand, there are problems with multiple objective functions, where several objectives must be optimized at the same time. In a structural optimization problem, for instance, it is necessary to minimize the weight of the structure while maximizing its strength. Different objective functions in the same problem may conflict: the optimum for function A may lie at the opposite end of the search space from the optimum for function B. Sometimes it is possible to recast a multi-objective problem as a single-objective one, for example by combining the functions into a weighted sum, or by turning some objectives into constraints.
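The weighted-sum idea can be sketched in a few lines. This is a minimal illustration, not a real structural model: the `weight()` and `strength()` functions below are hypothetical stand-ins, and the 50/50 weights are an arbitrary choice.

```python
# Weighted-sum scalarization: fold two competing objectives into one.
# weight() and strength() are hypothetical stand-ins for a real model.

def weight(x):
    """Structural weight grows with the design variable x (minimize)."""
    return 2.0 * x

def strength(x):
    """Structural strength as a function of x (maximize)."""
    return 10.0 * x - x ** 2

def combined_objective(x, w1=0.5, w2=0.5):
    # Negate strength so a single minimization handles both goals.
    return w1 * weight(x) - w2 * strength(x)

# Crude grid search over candidate designs in [0, 10].
best = min((combined_objective(x / 10), x / 10) for x in range(0, 101))
print(best)  # best (value, design) pair found
```

Changing the weights `w1` and `w2` trades one objective against the other, which is exactly the balancing act the text describes.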
Variables or Parameters
The variables or parameters of the objective function determine the performance of the model on the problem being solved. Variables are essential because the objective function and the problem constraints are defined in terms of them.
Constraints
The constraints tell the variables what ranges and types of values they may take. In other words, constraints "draw" the playing field for the variables. Constraints are not essential: unconstrained optimization is itself a very large field involving many different types of problems.
Optimization problem types
Optimization problems can be divided into three categories.
Continuous optimization
All variables can take any value within a range, usually real numbers. We can distinguish two types of continuous optimization:
Unconstrained optimization consists in optimizing the objective function with no restrictions on the variable values. The variables can take any value, which implies an enormous search space. For these kinds of problems, the optimization algorithm to use depends on the nature of the function being optimized, and on the types and number of variables.
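When the objective is smooth and its gradient is available, gradient descent is one of the simplest algorithms for the unconstrained case. The following is a minimal sketch on an assumed toy function, f(x, y) = (x - 3)² + (y + 1)², whose known optimum is (3, -1):

```python
# Gradient descent on an unconstrained problem:
# minimize f(x, y) = (x - 3)^2 + (y + 1)^2, optimum at (3, -1).

def grad(x, y):
    # Analytic gradient of f.
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0          # arbitrary starting point
lr = 0.1                 # step size
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy

print(round(x, 4), round(y, 4))  # converges close to 3 and -1
```

Nothing restricts `x` and `y` here; the algorithm is free to move anywhere in the plane, which is what makes the problem unconstrained.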
Constrained optimization restricts the values the variables may take. There are several constrained optimization problem types:
| Type | Description |
| --- | --- |
| Nonlinear constrained optimization | The general optimization problem: minimize a nonlinear function subject to nonlinear constraints. |
| Bound-constrained optimization | The variables describe physical quantities and are constrained to a specific range. |
| Quadratic programming | Minimization of a quadratic function subject to linear constraints. |
| Linear programming | Minimization of a linear objective function of continuous variables, subject to linear constraints. |
| Semidefinite programming | A generalization of linear programming in which a linear constraint is replaced by a semidefinite constraint. |
| Stochastic programming | The problem involves future events; some of the data are given by random values, subject to constraints. |
| Network programming problems | Applications that can be represented as flow through a network. They can be linear or nonlinear. |
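Bound-constrained optimization, for example, can be handled by projecting each gradient step back into the feasible range. This is a minimal sketch on an assumed toy problem: minimize f(x) = (x - 5)² subject to 0 ≤ x ≤ 3, so the unconstrained optimum (x = 5) is infeasible and the solution lies on the bound:

```python
# Projected gradient descent for a bound-constrained problem:
# minimize f(x) = (x - 5)^2 subject to 0 <= x <= 3.

def clamp(x, lo, hi):
    # Projection onto the feasible interval [lo, hi].
    return max(lo, min(hi, x))

x = 1.0                      # feasible starting point
lr = 0.1                     # step size
for _ in range(100):
    g = 2 * (x - 5)          # gradient of f
    x = clamp(x - lr * g, 0.0, 3.0)

print(x)  # 3.0: the optimum sits on the upper bound
```

The projection step is the only difference from plain gradient descent; it is what keeps every iterate inside the constraint "playing field".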
Discrete optimization
Discrete optimization involves variables restricted to integer values. The two types of discrete optimization are:
| Type | Description |
| --- | --- |
| Integer programming | In many applications, such as the famous Travelling Salesman Problem, the solution to the optimization problem makes sense only if the variables take integer values. |
| Stochastic programming | It is tempting to assume that the data in a problem are accurate, but in many problems they are not well defined: there may be measurement errors, the data may only be available as aggregated segments, or the data may concern the future and carry large uncertainty. The fundamental idea in stochastic programming is the concept of recourse: the ability to take a corrective decision after a random event has occurred. The data may therefore be known only with some precision or, in the case of future events, may be random. |
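An integer program small enough can be solved by enumerating every integer assignment. This minimal sketch uses an assumed toy 0/1 knapsack instance (the item values, weights, and capacity are made up for illustration); each variable must be 0 or 1, since "take half an item" has no meaning:

```python
# 0/1 integer programming by brute force: a tiny knapsack instance.
# The item data below are hypothetical, chosen only for illustration.
from itertools import product

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

best_value, best_choice = 0, None
for choice in product([0, 1], repeat=len(values)):    # every integer assignment
    total_weight = sum(w * c for w, c in zip(weights, choice))
    if total_weight <= capacity:                      # feasibility check
        total_value = sum(v * c for v, c in zip(values, choice))
        if total_value > best_value:
            best_value, best_choice = total_value, choice

print(best_value, best_choice)  # 220 (0, 1, 1): take items 2 and 3
```

Enumeration is exponential in the number of variables, which is why real integer programming relies on techniques such as branch and bound rather than brute force.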
Multi-objective optimization
Multi-objective optimization consists in the simultaneous optimization of several objectives. Problems of this kind are common in design. In most multi-objective problems it is very unlikely that all objectives will be optimized by the same choice of parameters, so a certain balance between the criteria is needed to ensure a satisfactory result.
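One way to expose that balance is to compute the Pareto set: the designs that no other design beats in every objective at once. This is a minimal sketch over hypothetical candidate designs, each scored as (weight, -strength) so that both entries are minimized:

```python
# Pareto filtering for a two-objective design problem.
# The candidate designs and their scores are hypothetical.
designs = {
    "A": (10.0, -50.0),
    "B": (12.0, -70.0),
    "C": (15.0, -65.0),   # heavier AND weaker than B, so dominated
    "D": (20.0, -90.0),
}

def dominates(p, q):
    # p dominates q if p is no worse in every objective and better in one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = {name for name, p in designs.items()
          if not any(dominates(q, p)
                     for other, q in designs.items() if other != name)}
print(sorted(pareto))  # ['A', 'B', 'D']: C is dominated by B
```

Every design left in the set represents a different trade-off between the criteria; picking among them is where the balancing judgment the text mentions comes in.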