by ylayron angel


**Linear programming** (**LP**, or **linear optimization**) is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming is a specific case of mathematical programming (mathematical optimization).

More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polyhedron, a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists.

Linear programs are problems that can be expressed in canonical form:

maximize **c**^{T}**x**

subject to *A***x** ≤ **b** and **x** ≥ **0**

where **x** represents the vector of variables (to be determined), **c** and **b** are vectors of (known) coefficients, and *A* is a (known) matrix of coefficients. The expression to be maximized or minimized is called the *objective function* (**c**^{T}**x** in this case). The inequalities *A***x** ≤ **b** are the constraints which specify a convex polytope over which the objective function is to be optimized. (In this context, two vectors are comparable when every entry in one is less than or equal to the corresponding entry in the other. Otherwise, they are incomparable.)
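
This canonical form maps directly onto an off-the-shelf LP solver. The sketch below uses SciPy's `linprog` routine with illustrative values for **c**, *A*, and **b** (the numbers are made up for demonstration); since `linprog` minimizes, the objective vector is negated to maximize:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: maximize c^T x subject to A x <= b, x >= 0.
c = np.array([3.0, 2.0])          # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])        # constraint matrix
b = np.array([4.0, 5.0])          # right-hand side

# linprog minimizes, so pass -c to maximize c^T x.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

print(res.x)        # optimal point: x1 = 1, x2 = 3
print(-res.fun)     # optimal objective value: 9.0
```

The optimum lies at the vertex where both constraints are tight, as the theory predicts for a bounded LP with a unique solution.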

Linear programming can be applied to various fields of study. It is used most extensively in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.

History

The problem of solving a system of linear inequalities dates back at least as far as Fourier, after whom the method of Fourier–Motzkin elimination is named. Linear programming itself was first developed by Leonid Kantorovich in 1939.^{[1]} It was used during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. The method was kept secret until 1947, when George B. Dantzig published the simplex method and John von Neumann developed the theory of duality and applied it in the field of game theory. Kantorovich, Dantzig, and von Neumann are considered the three founding figures of the subject. After the war, many industries adopted linear programming in their daily planning.

The linear-programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems.

Dantzig’s original example of finding the best assignment of 70 people to 70 jobs illustrates the usefulness of linear programming. The computing power required to test all 70! permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimal solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of candidate solutions that must be checked.
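
The size of that search space is easy to verify directly: 70! has 101 digits, dwarfing the roughly 10^{80} particles usually quoted for the observable universe:

```python
import math

# Number of ways to assign 70 people to 70 jobs.
n_assignments = math.factorial(70)

print(len(str(n_assignments)))   # 101 digits, i.e. about 1.2e100
print(n_assignments > 10**80)    # True: exceeds ~10^80 particles
```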

Uses

Linear programming is a considerable field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as *network flow* problems and *multicommodity flow* problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as *duality,* *decomposition,* and the importance of *convexity* and its generalizations. Linear programming is also heavily used in microeconomics and company management for problems of planning, production, transportation, and technology. Although modern management issues are ever-changing, most companies would like to maximize profits or minimize costs with limited resources; many such issues can therefore be characterized as linear programming problems.

Standard form

*Standard form* is the usual and most intuitive form of describing a linear programming problem. It consists of the following four parts:

§ A **linear function to be maximized**

e.g. *f*(*x*_{1}, *x*_{2}) = *c*_{1}*x*_{1} + *c*_{2}*x*_{2}

§ **Problem constraints** of the following form

e.g.

*a*_{11}*x*_{1} + *a*_{12}*x*_{2} ≤ *b*_{1}

*a*_{21}*x*_{1} + *a*_{22}*x*_{2} ≤ *b*_{2}

*a*_{31}*x*_{1} + *a*_{32}*x*_{2} ≤ *b*_{3}

§ **Non-negative variables**

e.g. *x*_{1} ≥ 0, *x*_{2} ≥ 0

§ **Non-negative right hand side constants**

e.g. *b*_{i} ≥ 0, *i* = 1, 2, 3

The problem is usually expressed in *matrix form*, and then becomes:

maximize { **c**^{T}**x** | *A***x** ≤ **b**, **x** ≥ **0** }

Other forms, such as minimization problems, problems with constraints in alternative forms, and problems involving negative variables, can always be rewritten into an equivalent problem in standard form.
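
As an illustration of these rewrites, the sketch below (a made-up problem solved with SciPy's `linprog`) splits a free variable *x*_{2} into the difference of two non-negative variables, *x*_{2} = *x*_{2}^{+} − *x*_{2}^{−}, and confirms that both formulations reach the same optimum:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative problem with a free (unrestricted) variable x2:
#   maximize x1 + x2  subject to  x1 + 2*x2 <= 4,  x1 <= 3,  x1 >= 0.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0],
              [1.0, 0.0]])
b = np.array([4.0, 3.0])

# Direct solve, letting the solver handle the free variable.
direct = linprog(-c, A_ub=A, b_ub=b,
                 bounds=[(0, None), (None, None)], method="highs")

# Standard-form rewrite: split x2 = x2p - x2n with x2p, x2n >= 0,
# so that every variable is non-negative.
A_std = np.array([[1.0, 2.0, -2.0],
                  [1.0, 0.0, 0.0]])
c_std = np.array([1.0, 1.0, -1.0])
std = linprog(-c_std, A_ub=A_std, b_ub=b,
              bounds=[(0, None)] * 3, method="highs")

print(-direct.fun, -std.fun)   # both 3.5: the rewrite is equivalent
```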

**Example**

Suppose that a farmer has a piece of farm land, say *L* km^{2}, to be planted with either wheat or barley or some combination of the two. The farmer has a limited amount of fertilizer, *F* kilograms, and insecticide, *P* kilograms. Every square kilometer of wheat requires *F*_{1} kilograms of fertilizer and *P*_{1} kilograms of insecticide, while every square kilometer of barley requires *F*_{2} kilograms of fertilizer and *P*_{2} kilograms of insecticide. Let *S*_{1} be the selling price of wheat per square kilometer, and *S*_{2} be the price of barley. If we denote the area of land planted with wheat and barley by *x*_{1} and *x*_{2} respectively, then profit can be maximized by choosing optimal values for *x*_{1} and *x*_{2}. This problem can be expressed as the following linear programming problem in standard form:

Maximize: *S*_{1}*x*_{1} + *S*_{2}*x*_{2}

(maximize the revenue—revenue is the “objective function”)

Subject to:

0 ≤ *x*_{1} + *x*_{2} ≤ *L*

(limit on total area)

0 ≤ *F*_{1}*x*_{1} + *F*_{2}*x*_{2} ≤ *F*

(limit on fertilizer)

0 ≤ *P*_{1}*x*_{1} + *P*_{2}*x*_{2} ≤ *P*

(limit on insecticide)

*x*_{1} ≥ 0, *x*_{2} ≥ 0

(cannot plant a negative area).

Which in matrix form becomes:

maximize [*S*_{1} *S*_{2}] (*x*_{1}, *x*_{2})^{T}

subject to

[1 1; *F*_{1} *F*_{2}; *P*_{1} *P*_{2}] (*x*_{1}, *x*_{2})^{T} ≤ (*L*, *F*, *P*)^{T}, (*x*_{1}, *x*_{2})^{T} ≥ (0, 0)^{T}
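
With hypothetical numbers substituted for the symbolic data (the values of *L*, *F*, *P*, the prices, and the per-km^{2} requirements below are invented for illustration), the farmer's problem can be solved with SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for the farmer problem (all values illustrative).
L, F, P = 10.0, 200.0, 120.0   # land (km^2), fertilizer (kg), insecticide (kg)
S1, S2 = 5.0, 4.0              # selling price per km^2 of wheat, barley
F1, F2 = 25.0, 15.0            # fertilizer needed per km^2
P1, P2 = 10.0, 14.0            # insecticide needed per km^2

# maximize S1*x1 + S2*x2  =>  minimize the negated objective.
c = np.array([-S1, -S2])
A = np.array([[1.0, 1.0],
              [F1, F2],
              [P1, P2]])
b = np.array([L, F, P])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
x1, x2 = res.x
print(x1, x2, -res.fun)   # optimal areas (5, 5) and maximum revenue 45
```

With these numbers, planting 5 km^{2} of each crop uses up the land, fertilizer, and insecticide budgets exactly.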

Augmented form (slack form)

Linear programming problems must be converted into *augmented form* before being solved by the simplex algorithm. This form introduces non-negative *slack variables* to replace inequalities with equalities in the constraints. The problem can then be written in the following block matrix form:

Maximize *Z*:

[1 −**c**^{T} **0**; **0** *A* *I*] (*Z*, **x**, **x**_{s})^{T} = (0, **b**)^{T}

**x**, **x**_{s} ≥ **0**

where **x**_{s} are the newly introduced slack variables, and *Z* is the variable to be maximized.
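
A small sketch of this conversion (with made-up numbers) appends one slack variable per inequality, forming the block [*A* | *I*], and checks that the augmented equalities hold:

```python
import numpy as np

# Inequality form: A x <= b with x >= 0 (illustrative numbers).
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# Augmented (slack) form: append one slack variable per row,
# turning A x <= b into [A | I] [x; x_s] = b with x, x_s >= 0.
A_aug = np.hstack([A, np.eye(2)])

# Any feasible x determines the slacks uniquely: x_s = b - A x.
x = np.array([1.0, 2.0])   # a feasible point (A x = [3, 4] <= b)
x_s = b - A @ x            # slacks [1, 1]
z = np.concatenate([x, x_s])

print(A_aug @ z)           # equals b: [4. 5.]
print(np.all(z >= 0))      # True: all variables non-negative
```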

**Example**

The example above is converted into the following augmented form:

Maximize: *S*_{1}*x*_{1} + *S*_{2}*x*_{2}

(objective function)

Subject to:

*x*_{1} + *x*_{2} + *x*_{3} = *L*

(augmented constraint)

*F*_{1}*x*_{1} + *F*_{2}*x*_{2} + *x*_{4} = *F*

(augmented constraint)

*P*_{1}*x*_{1} + *P*_{2}*x*_{2} + *x*_{5} = *P*

(augmented constraint)

*x*_{1}, *x*_{2}, *x*_{3}, *x*_{4}, *x*_{5} ≥ 0.

where *x*_{3}, *x*_{4}, *x*_{5} are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused insecticide.

In matrix form this becomes:

Maximize *Z*:

[1 −*S*_{1} −*S*_{2} 0 0 0; 0 1 1 1 0 0; 0 *F*_{1} *F*_{2} 0 1 0; 0 *P*_{1} *P*_{2} 0 0 1] (*Z*, *x*_{1}, *x*_{2}, *x*_{3}, *x*_{4}, *x*_{5})^{T} = (0, *L*, *F*, *P*)^{T}, *x*_{1}, …, *x*_{5} ≥ 0

Duality

*See also: Dual linear program*

Every linear programming problem, referred to as a *primal* problem, can be converted into a *dual* problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the *primal* problem as:

Maximize **c**^{T}**x** subject to *A***x** ≤ **b**, **x** ≥ **0**;

with the corresponding **symmetric** dual problem,

Minimize **b**^{T}**y** subject to *A*^{T}**y** ≥ **c**, **y** ≥ **0**.

An alternative primal formulation is:

Maximize **c**^{T}**x** subject to *A***x** ≤ **b**;

with the corresponding **asymmetric** dual problem,

Minimize **b**^{T}**y** subject to *A*^{T}**y** = **c**, **y** ≥ **0**.
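
The symmetric primal–dual pair can be checked numerically. The sketch below (illustrative data, solved with SciPy's `linprog`) solves both problems and confirms that their optimal values coincide, as strong duality predicts for feasible bounded LPs:

```python
import numpy as np
from scipy.optimize import linprog

# Primal (symmetric form, illustrative data):
#   maximize c^T x  subject to  A x <= b, x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Symmetric dual:
#   minimize b^T y  subject to  A^T y >= c, y >= 0
# (rewrite A^T y >= c as -A^T y <= -c for linprog).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

# Strong duality: the optimal values coincide.
print(-primal.fun, dual.fun)   # both 9.0
```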