Program 2018

November 15, 8:30-09:00 – Registration
November 15, 9:00-17:45 – Scientific Program
November 15, 18:15 – Drinks Buffet

Scientific Program (amphitheater located in building 34):
Last update: November 18, 2018

09:00 – 09:10: Opening: Xavier Gandibleux, Claude Jard

09:10 – 10:00: Keynote I
Chair: Stefan Ruzika (Technische Universität Kaiserslautern, Germany)

Vincent T’Kindt (Université de Tours, France). Quantifying the hardness of the enumeration of Pareto optima: a theoretical framework with application to scheduling problems. (slides)

Enumerating Pareto optima is an issue that has been considered for numerous multiobjective optimization problems and that raises hard computational and theoretical challenges. One of them is the question of the complexity price that has to be paid to obtain the entire Pareto set. Complexity theory has quite recently provided complexity classes for qualifying this price. However, for a given multiobjective optimization problem, establishing its complexity status does not provide much more information than "it is easy" or "it is hard" to solve.
In this talk we will focus on recent theoretical developments that aim to quantify, in the worst case, the complexity of enumerating the set of Pareto optima when this cannot be done in polynomial time. This follows early work, in the field of computer science, on "Exponential Time Algorithms" (ETA) for single-criterion NP-hard optimization problems. Such an algorithm has calculable worst-case time and space complexities which are as low as possible (and always better than those of brute-force algorithms). These complexities then give a hint on the tractability of an NP-hard problem. For example, it is now known that the maximum independent set problem in graphs can be solved in O*(1.1996^n) worst-case time, with n the number of vertices.
Up to now, no work has addressed this issue for multiobjective optimization problems. We will remedy that by introducing the basic notions related to ETA and showing the existence of such algorithms for some hard multiobjective scheduling problems.
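For reference, the brute-force baseline that ETA-style algorithms improve upon can be sketched in a few lines (a minimal Python illustration assuming the outcome vectors are given explicitly; this is not the algorithmic framework from the talk):

```python
def dominates(a, b):
    """True if outcome vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(outcomes):
    """Brute-force enumeration of Pareto optima via O(n^2) dominance checks."""
    return [p for p in outcomes
            if not any(dominates(q, p) for q in outcomes)]

# (2, 3) dominates (3, 4), so the latter is filtered out
print(pareto_filter([(1, 5), (2, 3), (3, 4), (4, 1)]))  # → [(1, 5), (2, 3), (4, 1)]
```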

10:00 – 10:10 Break

10:10 – 11:25 Session 1: Algorithms
Chair: Kathrin Klamroth (Bergische Universität Wuppertal, Germany)

10:10 – 10:35: Sune Lauth Gadegaard (Aarhus Universitet, Denmark). Finding all efficient solutions to a bi-objective combinatorial optimization problem. (slides) (Abstract)

In many real-world applications, a variety of possible actions for obtaining the same outcome is desirable, as this equips a decision maker with more room to maneuver. In this talk we propose a method for finding not only all non-dominated outcomes, but all Pareto optimal solutions to combinatorial optimization problems. The methodology developed is a decision-space branch-and-bound algorithm based on bound sets.

10:35 – 11:00: Stefan Ruzika (Technische Universität Kaiserslautern, Germany). Can we approximate a weight set decomposition? (slides) Joint work with Pascal Halffmann (Technische Universität Kaiserslautern).

The Weighted Sum Method is a well-known and commonly used scalarization technique, not only for solving multi-objective optimization problems but also for quantifying the preferences of a decision maker. In order to analyse either these preferences or the set of so-called (extreme) supported nondominated images, the Weight Set Decomposition can come into play: this tool decomposes the set of all normalized positive weights into components such that, for each component, one extreme supported nondominated image is optimal for all weights in this component. Hence, one can deduce a "quality" measure for the chosen solution or the chosen weights. However, computing the whole weight set decomposition can be time-consuming or unnecessary.
In this talk, we introduce the idea of an approximation for computing the weight set decomposition. To the best of our knowledge, this has not been done before, hence we lay the foundation for further development in this research.
First, we state what can be understood as an approximate weight set decomposition, and we list requirements and conditions from which we construct a proper definition. Further, we present various measures for stating the quality of the approximation, such as an approximation factor. Last, we present various approaches for an approximation algorithm that decomposes the weight set. We examine this algorithm regarding correctness, approximation quality, convergence rate, running time and bounds, and give some illustrative examples.
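In the biobjective case, the exact decomposition that any approximation would be measured against has a simple closed form: consecutive extreme supported points share a breakpoint weight. A minimal Python sketch (assuming the input points are exactly the extreme supported nondominated points of a minimization problem, with weights normalized as (w, 1-w)):

```python
def weight_set_decomposition(points):
    """For extreme supported nondominated points (minimization), return for
    each point the interval of weights w on the first objective for which it
    minimizes w*y1 + (1-w)*y2."""
    pts = sorted(points)            # increasing y1, hence decreasing y2
    intervals, hi = [], 1.0
    for a, b in zip(pts, pts[1:]):
        # breakpoint where a and b attain the same weighted-sum value
        w = (a[1] - b[1]) / ((b[0] - a[0]) + (a[1] - b[1]))
        intervals.append((a, (w, hi)))
        hi = w
    intervals.append((pts[-1], (0.0, hi)))
    return intervals

for point, (lo, hi) in weight_set_decomposition([(1, 5), (2, 3), (4, 1)]):
    print(point, "is optimal for w in", (lo, hi))
```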

11:00 – 11:25: Luís Paquete (University of Coimbra, Portugal). Some results for the Hypervolume Subset Selection Problem. Joint work with Miguel M. Duarte (University of Coimbra), José R. Figueira (University of Lisbon), Carlos M. Fonseca (University of Coimbra), Ricardo J. Gomes (University of Coimbra), Andreia P. Guerreiro (University of Coimbra), Tobias Kuhn (University of Kaiserslautern), Stefan Ruzika (University of Kaiserslautern).

The hypervolume subset selection problem arises within selection procedures of multiobjective evolutionary algorithms, as well as in extracting a representative subset of efficient solutions for multiobjective optimization problems. In this talk, we discuss a number of recent results for this problem, in particular an integer programming formulation and the development of several exact and approximation algorithms.
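For two objectives, the hypervolume indicator has a simple sweep-line form, and a greedy variant of subset selection can be sketched on top of it (a minimal Python illustration; this is not one of the exact or approximation algorithms from the talk):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a mutually nondominated point set (minimization)
    within the box bounded by the reference point ref."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points):     # increasing x, hence decreasing y
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

def greedy_hvss(points, k, ref):
    """Greedily pick k points maximizing the hypervolume of the subset."""
    chosen, remaining = [], list(points)
    for _ in range(k):
        best = max(remaining, key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
        remaining.remove(best)
    return chosen

print(hypervolume_2d([(1, 5), (2, 3), (4, 1)], ref=(5, 6)))     # → 12.0
print(greedy_hvss([(1, 5), (2, 3), (4, 1)], k=2, ref=(5, 6)))   # → [(2, 3), (4, 1)]
```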

11:25 – 11:50 Coffee Break

11:50 – 12:40 Session 2: Representations
Chair: Matthias Ehrgott (Lancaster University, UK)

11:50 – 12:15: Martin Philip Kidd (Technical University of Denmark, Denmark). Equidistant representations: connecting coverage and uniformity in biobjective optimization. (slides) Joint work with Richard Martin Lusby and Jesper Larsen (Technical University of Denmark).

The nondominated frontier of a multiobjective optimization problem can be overwhelming to a decision maker, as it is often either very large or infinite in size. Instead, a discrete representation of this set in the form of a small sample of points is often preferred. In this talk we consider the Discrete Representation Problem (DRP), which is itself a triobjective optimization problem. The three objectives comprise three standard quality measures for discrete representations, namely coverage, uniformity and the cardinality of the set. We introduce the notion of complete equidistant representations, and prove that such a representation provides a nondominated solution to the DRP. In addition, we show with the help of complete equidistant representations that coverage and uniformity can be seen as dual problems given a fixed cardinality, and therefore that optimality gaps for coverage and uniformity can be obtained from any representation. Moreover, even though the definition of the coverage error requires the full nondominated set, we show how the coverage error for a given representation can be calculated by generating a much smaller set. Finally, we present a new method for finding discrete representations of a desired cardinality that outperforms existing methods w.r.t. coverage and uniformity on a set of mixed-integer programming benchmark instances.
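The two quality measures the DRP trades off against cardinality can be stated concretely. A minimal Python sketch using Euclidean distances (the talk's exact definitions may use a different metric):

```python
from math import dist

def coverage_error(rep, nondominated):
    """Worst distance from a nondominated point to its nearest representative
    (smaller is better)."""
    return max(min(dist(r, z) for r in rep) for z in nondominated)

def uniformity(rep):
    """Smallest pairwise distance within the representation (larger is better)."""
    return min(dist(a, b) for i, a in enumerate(rep) for b in rep[i + 1:])

frontier = [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
rep = [(0, 4), (2, 2), (4, 0)]
print(coverage_error(rep, frontier))  # sqrt(2), attained at (1, 3) and (3, 1)
print(uniformity(rep))                # 2*sqrt(2)
```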

12:15 – 12:40: Kerstin Daechert (Fraunhofer Institute for Industrial Mathematics ITWM, Kaiserslautern, Germany). Obtaining representations for continuous optimization problems. (slides)

In this talk we present the generation of representations of continuous optimization problems with the help of different variants of an adaptive parametric algorithm. This algorithm relies on the successive solution of a scalarized problem whose parameters are varied in a systematic way. In particular, we consider a weighted Tchebycheff method as scalarization. In each iteration, either a new nondominated point is obtained or a certain part of the outcome space, which turns out to be empty, can be discarded from further consideration. We implement different variants of our approach and compute representations for certain test problems from the literature. The representations obtained by our adaptive approach are compared to an epsilon-constraint scalarization with equidistant parameter choice. Our results show that, in general, fewer infeasible and/or redundant scalarized problems occur for adaptive methods. Moreover, by construction, these methods adapt better to the shape of the nondominated set, which is validated by certain quality measures.
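The weighted Tchebycheff scalarization mentioned above can be stated in a few lines for a discrete candidate set (a minimal Python sketch; the talk concerns continuous problems, where each scalarized problem is itself an optimization run):

```python
def weighted_tchebycheff(outcomes, weights, ideal):
    """Return the outcome minimizing max_i w_i * (y_i - z*_i), where z* is
    the ideal point (minimization). Unlike the weighted sum, this
    scalarization can also reach unsupported nondominated points."""
    return min(outcomes,
               key=lambda y: max(w * (yi - zi)
                                 for w, yi, zi in zip(weights, y, ideal)))

# with equal weights, (2, 3) balances both objectives best
print(weighted_tchebycheff([(1, 5), (2, 3), (4, 1)], (0.5, 0.5), (1, 1)))  # → (2, 3)
```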

12:40 – 14:00 Lunch Break

14:00 – 15:15 Session 3: Applications I
Chair: Lars R. Nielsen (University of Aarhus, Denmark)

14:00 – 14:25: Antoine Kerbérénès (Université Paris-Dauphine and Naval Group Research, France). Multi-Objective Combinatorial Optimization for Weakly Coupled Systems. Joint work with Daniel Vanderpooten (Université Paris-Dauphine) and Jean-Michel Vanpeperstraete (Naval Group Research).

We say that a system is weakly coupled (or interwoven) when it can usefully be seen as a collection of subsystems which are independent enough to be distinguished, but are still mildly linked together. Such a characterization is relevant when it suggests hierarchical decompositions and coordination schemes for the optimization problems associated with these systems, or when the mildness of the interweaving allows for relaxations or simplifications of the coupling constraints. These strategies have been studied in the continuous, single-objective and multi-disciplinary contexts, where different objectives relate to different subsystems. However, in a context where each of the sub-problems is multi-objective, decomposition is not straightforward. Furthermore, there is still no dedicated literature addressing the specificities of integrality in weakly coupled systems (where even the single-objective versions of the sub-problems can be NP-hard). First, we propose a formal definition of weakly coupled systems and general decomposition and relaxation schemes, and we present difficulties with decomposition which are specific to the multi-objective case. Then we present a particular reference problem of this kind, as well as proposed strategies for the enumeration, representation or approximation of its Pareto set.

14:25 – 14:50: Onur Tanil Doganay (Bergische Universität Wuppertal, Germany). Gradient-Based Biobjective Shape Optimization of Ceramic Components: Probability of Failure versus Cost. (slides) Joint work with Johanna Schultes, Camilla Hahn, Hanno Gottschalk, Kathrin Klamroth, Michael Stiglmayr (University of Wuppertal).

Single-objective shape optimization is widely used in a multitude of engineering applications. While sensitivities for cost and fluid-dynamic objective functions have been used for a long time, this is not the case for objectives related to reliability, since peak stress is not a differentiable function of the shape.
This changes if deterministic reliability criteria are replaced by a probabilistic criterion, namely the probability of failure (PoF), which has recently been introduced to the field of ceramic design and low-cycle fatigue.
Here we present a finite-element-based first-discretize-then-adjoint approach for the calculation of shape gradients (sensitivities) with regard to the PoF of ceramic designs. This is applied to the simultaneous minimization of the PoF and the volume of a 2D ceramic rod in a biobjective shape optimization problem.
We have implemented a biobjective descent algorithm and compare it with a classical weighted sum approach.

14:50 – 15:15: Nicolas Dupin (Université de Lille, France). Scheduling maintenances of nuclear power plants, from 2-stage robust programming to multi-objective optimization. (slides) Joint work with El-Ghazali Talbi (Université de Lille).

Scheduling the maintenance of nuclear power plants is a complex industrial problem, formulated as a 2-stage stochastic problem for the EURO/ROADEF 2010 challenge with uncertainty on power demands and production costs. The first stage optimizes the maintenance dates and refueling decisions, whereas the second stage concerns unit commitment problems to fulfill the power demands, ensuring the feasibility and costs of the first-stage decisions. Our extension considers uncertainty on the duration of the maintenance operations. 2-stage robust optimization is a useful framework for the industrial needs. This presentation first highlights the methodological and numerical difficulties in solving this 2-stage robust optimization problem. To face these issues, a lighter robustness model is adapted to the operational needs. It leads to a bi-objective model computing a Pareto front to find the best compromise solutions between cost and robustness. The epsilon-constraint method fits the operational needs here, and shows the existence of unsupported points in the bi-objective cost/robustness optimization. Furthermore, the methodology can be extended to also maximize the stability of the maintenance planning for the dynamic optimization process in operations, considering monthly re-optimizations.
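The epsilon-constraint scheme mentioned here, which reaches unsupported points that a weighted sum misses, can be sketched on a discrete outcome set (a minimal Python illustration assuming an integer-valued second objective; in the application each step would solve a MIP):

```python
def epsilon_constraint(outcomes):
    """Enumerate all nondominated points (minimization) by repeatedly solving
    lexmin(f1, f2) subject to f2 <= eps, then tightening eps.
    Assumes integer-valued f2 so eps can be decreased by 1."""
    nondominated, eps = [], float("inf")
    while True:
        feasible = [y for y in outcomes if y[1] <= eps]
        if not feasible:
            return nondominated
        best = min(feasible)        # tuple order = lexicographic (f1, then f2)
        nondominated.append(best)
        eps = best[1] - 1

# (3, 3) lies above the convex hull of (1, 5) and (5, 0), i.e. it is
# unsupported, yet the method finds it; the dominated (4, 4) is skipped
print(epsilon_constraint([(1, 5), (3, 3), (4, 4), (5, 0)]))  # → [(1, 5), (3, 3), (5, 0)]
```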

15:15 – 15:30 Break

15:30 – 16:20 Session 4: Applications II
Chair: Daniel Vanderpooten (Université Paris-Dauphine, France)

15:30 – 15:55: Nicolas Forget (Université de Nantes, France). On two speeding-up techniques for the computation of multi-objective shortest paths with a label setting algorithm. (slides) Joint work with Xavier Gandibleux, Didier Robbes (Université de Nantes) and Matthias Ehrgott (Lancaster University).

We consider the computation of all nondominated points for the multi-objective shortest path problem when the objectives are linear and all costs are positive. In this context, and based on Martins' algorithm, the principle of a bi-directional strategy has already been discussed in the literature, with the aim of reducing the number of labels the algorithm has to manage. This talk presents an overview of the contributions found in the literature, discusses the efficiency of this strategy in relation to the instance being solved, and reports several observations gathered during this study.
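The label-setting principle underlying Martins' algorithm can be sketched compactly (a minimal biobjective Python version with lexicographic label selection; the bi-directional strategy discussed in the talk is not shown):

```python
import heapq

def biobjective_shortest_paths(graph, source, target):
    """graph: node -> list of (successor, (c1, c2)) with positive costs.
    Returns all nondominated cost vectors from source to target."""
    heap = [((0, 0), source)]       # labels popped in lexicographic order
    labels = {}                     # node -> kept (nondominated) labels
    while heap:
        cost, node = heapq.heappop(heap)
        # discard the label if an already-kept label at this node dominates it
        if any(d[0] <= cost[0] and d[1] <= cost[1] for d in labels.get(node, [])):
            continue
        labels.setdefault(node, []).append(cost)
        for succ, (c1, c2) in graph.get(node, []):
            heapq.heappush(heap, ((cost[0] + c1, cost[1] + c2), succ))
    return labels.get(target, [])

g = {"s": [("a", (1, 3)), ("b", (2, 1))],
     "a": [("t", (1, 1))],
     "b": [("t", (1, 1))]}
print(biobjective_shortest_paths(g, "s", "t"))  # → [(2, 4), (3, 2)]
```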

15:55 – 16:20: Clemens Thielen (Technische Universität Kaiserslautern, Germany). Duty Rostering for Physicians at a Department of Orthopedics and Trauma Surgery: Decision Support using Mathematical Optimization. (slides)

Duty rostering for nurses and physicians is an important task within personnel planning in hospitals and has a large impact both on the efficient operation of the hospital and on employee satisfaction. Creating duty rosters for physicians is particularly challenging due to complex constraints and a multitude of conflicting objectives that cannot be handled adequately when generating duty rosters manually. This motivates using mathematical optimization models in order to generate duty rosters for physicians that satisfy all arising constraints and at the same time allow an efficient operation of the department.
We present an integer programming model for the generation of duty rosters for physicians at a large department of orthopedics and trauma surgery that has been used in practice since January 2016. Using real-world data from the practice partner, we evaluate the quality of the generated duty rosters with respect to several objective functions and additionally present ways of dealing with unpredictable disruptions (caused, e.g., by the absence of physicians due to illness).

16:20 – 16:45 Coffee Break

16:45 – 17:35: Keynote II
Chair: Sophie Parragh (Johannes Kepler University Linz, Austria)

Natashia Boland (Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, USA). Criterion Space Search Algorithms for Solving Multiobjective Mixed Integer Linear Programs.

Multiobjective integer linear programming has been an explosive field of research in recent years, with new algorithms and new theoretical results, following different directions, being created at a rapid pace. This talk will focus on recent advances in criterion space search algorithms, which leverage the power of single-objective integer linear programming solvers, using them as "black boxes". It will also focus on the case in which the integer program has some continuous variables: it is a mixed integer program. With two objectives, the nondominated frontier of a mixed integer program may include continuous piecewise linear sections, with either closed or open end points, as well as single, isolated points such as occur in the pure integer case. The talk will review some of the recent criterion space search methods developed for biobjective mixed integer programs, including the first algorithm accompanied by a complexity result describing the (worst-case) number of single-objective integer programs that need to be solved as a function of the number of line segments (or points) in the nondominated frontier. The pros and cons of alternative methods, and their computational performance, both of which depend on the characteristics of the instances, will be discussed.

17:35 – 17:45: Closing: Xavier Gandibleux, Stefan Ruzika