Summer 2001
Time: Mondays, 14:15 - 15:45
Location: IFW A 32
(unless stated otherwise)
Organizers: CGC faculty at ETH Zurich
Contact: Emo Welzl
The CGC colloquium does not take place on April 16 (Easter Monday), April 23 (Sechseläuten), May 14 (CGC Workshop at Monte Verità), or June 4 (Whit Monday).
April 09, 2001
Shmuel Onn
Technion - Israel Institute of Technology, Haifa, Israel
Micha Sharir
Tel Aviv University, Israel (and ETH Zurich, Summer 2001)
The Union of Geometric Objects and Generalized Voronoi Diagrams
Jürg Nievergelt
ETH Zurich
Martin Dietzfelbinger
University of Technology Ilmenau, Germany
Leonid Khachiyan
Rutgers University, New Jersey, U.S.A.
Heribert Vollmer
University of Würzburg, Germany
A Polynomial-Time Approximation Scheme for Base Station Positioning in UMTS-Networks
Volker Diekert
University of Stuttgart, Germany
Monday, April 09, 2001, 14:15 - 15:45, IFW A 32
The partition problem concerns the partitioning of a set of vectors in d-space into p parts so as to maximize an objective function which is convex on the sum of vectors in each part, subject to constraints on the shape of admissible partitions. The problem has broad expressive power and arises in a variety of areas ranging from economics to symbolic computation.

The problem can be treated by maximizing the same objective over a suitable partition polytope in d x p matrix space, and its computational complexity is intimately related to the vertex complexity of such polytopes.
My talk will provide an introduction to the subject and an overview of our ongoing ISF project on vector partitions, drawing on several articles with various coauthors, including Alon, Aviran, Hwang, Rothblum, Schulman and Sturmfels.
In particular, I will describe recent polynomial upper and lower bounds on the complexity of the problem with no shape restrictions, based on zonotopal refinements of partition polytopes, on the introduction of Momentopes, and on certain polynomial realizations of Davenport-Schinzel type sequences.
If time permits I will mention an application to symbolic computation - an efficient algorithm for constructing universal Gröbner bases of vanishing ideals of configurations by partition-realization of their state polytopes.
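To make the problem statement concrete (this is only a toy illustration, not Onn's methods), the following sketch brute-forces all assignments of a few plane vectors to p parts and maximizes one particular convex objective, the sum of squared Euclidean norms of the part sums; the function and instance names are invented for the example.

```python
from itertools import product

def max_partition(vectors, p, objective):
    """Brute-force search over all p^n assignments of n vectors to p parts,
    maximizing an objective applied to the list of part sums."""
    n = len(vectors)
    d = len(vectors[0])
    best_val, best_parts = None, None
    for assign in product(range(p), repeat=n):
        sums = [[0.0] * d for _ in range(p)]
        for v, part in zip(vectors, assign):
            for j in range(d):
                sums[part][j] += v[j]
        val = objective(sums)
        if best_val is None or val > best_val:
            best_val, best_parts = val, assign
    return best_val, best_parts

# A convex objective: sum of squared Euclidean norms of the part sums.
def sum_sq_norms(sums):
    return sum(sum(x * x for x in s) for s in sums)

vecs = [(1.0, 0.0), (0.9, 0.1), (-1.0, 0.2), (-0.8, -0.1)]
val, parts = max_partition(vecs, 2, sum_sq_norms)
```

With this convex objective the optimum groups similar vectors together, which is the kind of geometric structure the partition-polytope approach exploits.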
Monday, April 30, 2001, 14:15 - 15:45, IFW A 32
Micha Sharir
Tel Aviv University, Israel (and ETH Zurich, Summer 2001)
Let C be a collection of n objects in d-space, and let U denote the union of C. The combinatorial complexity of U is the number of faces, of all dimensions, of the arrangement of the boundary surfaces of the objects in C that lie on the boundary of U (e.g., a vertex is an intersection of d boundaries). If each of the objects in C has `constant description complexity', then the complexity of U is O(n^{d}), and in general this bound is tight.

We analyze the complexity of U in several special cases where one can obtain a smaller complexity bound. There are many such instances in the 2-dimensional case (pseudo-disks, fat objects), but the problem is quite hard in 3 and more dimensions.
The cases that we consider include convex polyhedra in 3 dimensions, axis-parallel cubes in any dimension, infinite congruent cylinders in 3 dimensions, fat dihedral wedges and congruent (arbitrarily oriented) cubes in 3 dimensions, and more.
An important special case is the union of Minkowski sums, where the objects are of the form { A_{i} + B }_{i=1..n} , where the A_{i}'s are convex pairwise-disjoint objects and B is another convex object, and X + Y denotes the Minkowski sum of X and Y.
The boundary of such a union can be regarded as a cross-section or a `level-set' of the generalized Voronoi diagram with the A_{i}'s as sites and with the metric (or convex distance function) induced by B as a `unit ball': The union boundary is the locus of points whose B-distance from the nearest site is exactly 1.
We discuss a few cases where sharper bounds are known either for the complexity of the union of Minkowski sums, as above, or for the complexity of the associated Voronoi diagram (a much harder problem).
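As background to the Minkowski sums appearing above, here is a minimal sketch of the classic linear-time edge merge computing the Minkowski sum of two convex polygons; it assumes general position (antiparallel opposite edges are not treated specially) and is standard material, not part of the results of the talk.

```python
def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as CCW vertex lists.
    Classic O(n + m) merge: rotate each polygon to start at its
    bottom-most (then left-most) vertex, then merge edges by angle."""
    def reorder(poly):
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]
    P, Q = reorder(P), reorder(Q)
    n, m = len(P), len(Q)
    result = []
    i = j = 0
    while i < n or j < m:
        result.append((P[i % n][0] + Q[j % m][0], P[i % n][1] + Q[j % m][1]))
        # Compare the directions of the current edges of P and Q.
        ex = (P[(i + 1) % n][0] - P[i % n][0], P[(i + 1) % n][1] - P[i % n][1])
        ey = (Q[(j + 1) % m][0] - Q[j % m][0], Q[(j + 1) % m][1] - Q[j % m][1])
        cross = ex[0] * ey[1] - ex[1] * ey[0]
        if j >= m or (i < n and cross > 0):
            i += 1                       # P's edge turns first
        elif i >= n or cross < 0:
            j += 1                       # Q's edge turns first
        else:
            i += 1; j += 1               # parallel edges merge into one
    return result

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0, 0), (2, 0), (1, 1)]
result = minkowski_sum(square, tri)
```

For the unit square and this triangle the sum is a hexagon, which matches the convex hull of all pairwise vertex sums.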
Monday, May 07, 2001, 14:15 - 15:45, IFW A 32
Jürg Nievergelt
ETH Zurich
For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve "simple" (well-defined and well-structured) problems has dominated algorithm design. Over the same time period, both the processing and storage capacity of computers have increased by roughly a factor of a million. The next few decades may well give us a similar rate of growth in raw computing power, due to various factors such as continuing miniaturization and parallel and distributed computing. If a quantitative change of orders of magnitude leads to qualitative advances, where will the latter take place? Only empirical research can answer this question.

Asymptotic complexity theory has emerged as a surprisingly effective tool for predicting run times of polynomial-time algorithms. For NP-hard problems, on the other hand, it yields overly pessimistic bounds. It asserts the non-existence of algorithms that are efficient across an entire problem class, but ignores the fact that many instances, perhaps including those of interest, can be solved efficiently. For such cases we need a complexity measure that applies to problem instances, rather than to over-sized problem classes.
Combinatorial optimization and enumeration problems are modeled by state spaces that usually lack any regular structure. Exhaustive search is often the only way to handle such "combinatorial chaos". Several general purpose search algorithms are used under different circumstances. We describe reverse search and illustrate this technique on a case study of enumerative optimization: enumerating the k shortest Euclidean spanning trees.
(18-35 in "Sofsem 2000 - Theory and Practice of Informatics", V. Hlavac, K.G. Jeffery and J. Wiedermann (eds.), Springer LNCS Vol 1963, 2000)
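The case study above enumerates the k shortest Euclidean spanning trees; purely as an illustration of the base case k = 1 (not of reverse search itself), here is a plain O(n^2) Prim computation of the shortest Euclidean spanning tree on a point set.

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph, O(n^2).
    Returns the edge list of the (single) shortest spanning tree --
    the starting object of a k-shortest-trees enumeration."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    best = [math.inf] * n       # cheapest known connection into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v] = dist(u, v)
                parent[v] = u
    return edges

pts = [(0, 0), (1, 0), (2, 0), (1, 2)]
tree = euclidean_mst(pts)
```

Reverse search would then walk the space of spanning trees by local exchanges starting from this optimum; that enumeration is not shown here.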
Monday, May 28, 2001, 14:15 - 15:45, IFW A 32
Since it was introduced by Carter and Wegman in 1979, the general and elegant concept of a universal class of hash functions has found numerous applications in a wide variety of areas in Theoretical Computer Science, ranging from the construction of data structures through the fundamentals of parallel and distributed computing to computational complexity theory.

The basic approach is as follows: Let U and M be fixed (finite) sets. Consider sets H of functions from U to M that can be evaluated efficiently and so that the values of a function chosen at random from H behave in some way similar to a random function from U to M.
The classical example is Carter and Wegman's concept "c-universality":
For x, y in U with x != y we have Prob_{h in H} (h(x) = h(y)) <= c / |M| for a constant c.

Many other concepts formulating stronger requirements have since been introduced, in particular "k-wise independence", for any k >= 2:
For every choice of k distinct elements x_{1}, ..., x_{k} in U, the random values h(x_{1}), ..., h(x_{k}) are independent and uniformly distributed in M.

In the lecture, we will look at some interesting constructions of such classes, and at methods from probability theory, algebra, and combinatorics that allow us to establish positive and negative properties of these classes. The role of these properties in selected applications will be described, in particular in recent constructions of "perfect" hash functions, i.e., functions with a very simple structure that are one-to-one on a given set S in U with |S| = |M|.
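A minimal sketch of the classical Carter-Wegman construction, h(x) = ((ax + b) mod p) mod m with p prime: the class is c-universal, and with both a and b random it is 2-wise independent on keys below p. The empirical collision estimate below is only an illustration; all constants are invented for the example.

```python
import random

def make_hash(p, m):
    """Draw h(x) = ((a*x + b) mod p) mod m at random from the classic
    Carter-Wegman class; p must be a prime larger than any key."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

# Empirically estimate the collision probability for one fixed pair x != y.
p, m = 10007, 100
x, y = 42, 4711
trials = 20000
collisions = 0
for _ in range(trials):
    h = make_hash(p, m)
    if h(x) == h(y):
        collisions += 1
rate = collisions / trials   # should be close to 1/m = 0.01
```

The observed rate concentrates around 1/m, in line with the c-universality bound c/|M| for a small constant c.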
Monday, June 11, 2001, 14:15 - 15:45, IFW A 32
An implicitly given Sperner hypergraph H is dual-bounded if the size of the dual hypergraph H^{d} can be bounded by a polynomial in the size and length of description of H. We discuss a general method for generating dual-bounded hypergraphs in incremental quasi-polynomial time. Our examples include multiple, partial and weighted transversals, the generation of all minimal infrequent itemsets in a database, all minimal Boolean or integer solutions to a monotone system of linear inequalities, all minimal spanning collections of given graphs and some other applications.

This talk is based on joint papers with E. Boros, K. Elbassioni, V. Gurvich and K. Makino.
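For illustration only: the dual hypergraph H^{d} mentioned above consists of the minimal transversals (inclusion-minimal hitting sets) of H. The brute-force sketch below generates it for a toy hypergraph; it is exponential, unlike the incremental quasi-polynomial methods of the talk.

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate all inclusion-minimal vertex sets hitting every edge of H.
    Candidates are tried in order of increasing size, so a candidate is
    minimal iff no previously found transversal is contained in it."""
    minimal = []
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            if all(s & e for e in edges):              # hits every edge?
                if not any(m <= s for m in minimal):   # no smaller one inside?
                    minimal.append(frozenset(s))
    return minimal

# Toy hypergraph: the three edges of a triangle.
H = [{1, 2}, {2, 3}, {1, 3}]
duals = minimal_transversals([1, 2, 3], H)
```

For the triangle, H^{d} consists of exactly the three 2-element vertex sets.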
Monday, June 25, 2001, 14:15 - 15:45, IFW A 32
Heribert Vollmer
University of Würzburg, Germany
We consider the following optimization problem for UMTS networks: For a specified teletraffic demand and possible base station locations, choose positions for base stations such that
- the construction costs are below a given limit,
- as much teletraffic as possible is supplied,
- the ongoing costs are minimal, and
- the intra-cell interference in the range of each base station is low.
We prove that for a particular specification of teletraffic (the so-called demand node concept), this problem has a polynomial-time approximation scheme, but cannot have a fully polynomial-time approximation scheme unless P=NP.
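A toy sketch of the demand-node model (not the PTAS from the talk): demand nodes carry traffic weights, and a brute-force search picks candidate sites within a budget so as to maximize the supplied demand. All numbers, names, and the fixed coverage radius are invented for the example.

```python
from itertools import combinations

def best_placement(sites, demand_nodes, radius, budget, cost):
    """Choose a subset of candidate sites with total cost <= budget that
    covers as much weighted demand as possible.  Brute force over all
    subsets; the real problem is NP-hard, hence the interest in a PTAS."""
    def covered(chosen):
        return sum(w for (x, y), w in demand_nodes
                   if any((x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2
                          for sx, sy in chosen))
    best = (0, ())
    for k in range(len(sites) + 1):
        if k * cost > budget:
            break
        for chosen in combinations(sites, k):
            best = max(best, (covered(chosen), chosen))
    return best

sites = [(0, 0), (4, 0)]
demand = [((0, 1), 5), ((4, 1), 3), ((2, 3), 2)]
supplied, chosen = best_placement(sites, demand, radius=1.5, budget=2, cost=1)
```

Here both stations fit into the budget and together supply 8 of the 10 units of demand; the remote node at (2, 3) stays uncovered.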
Monday, July 02, 2001, 14:15 - 15:45, IFW A 32
Volker Diekert
University of Stuttgart, Germany
Famous results of Makanin state that the existential theories of equations in free monoids and in free groups are decidable. The algorithms of Makanin are very complex: In fact, in the case of free groups it is known that the scheme of Makanin is not primitive recursive.

Recently, Plandowski found a different approach to solve word equations, and Gutiérrez extended his method to the case of free groups. As a consequence, it has been possible to replace a non-primitive recursive scheme for solving equations in free groups by a polynomial space bounded algorithm.
In my talk I will report about a further generalization which considers equations in free groups where solutions must satisfy rational constraints. In this form we obtained (in a joint work with Gutiérrez and Hagenah) a PSPACE-completeness result. I will also explain why rational constraints are important in various applications.
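To illustrate what a word equation and its solutions are (this shows neither Makanin's nor Plandowski's methods, and no rational constraints): a brute-force solver that tries all substitutions up to a length bound. The length bound is an artificial assumption that the real algorithms do not need.

```python
from itertools import product

def solve_word_equation(lhs, rhs, variables, alphabet, max_len):
    """Search for a solution of a word equation by substituting every word
    of length <= max_len for each variable and comparing both sides.
    Returns a satisfying assignment (dict) or None."""
    words = [''.join(w) for n in range(max_len + 1)
             for w in product(alphabet, repeat=n)]
    for assignment in product(words, repeat=len(variables)):
        sub = dict(zip(variables, assignment))
        def expand(side):
            # Replace variable letters, keep alphabet letters as-is.
            return ''.join(sub.get(c, c) for c in side)
        if expand(lhs) == expand(rhs):
            return sub
    return None

# X a b = a b X  has the solutions X in (ab)*, e.g. the empty word.
sol = solve_word_equation('Xab', 'abX', ['X'], 'ab', max_len=4)
```

The equation Xab = abX is solvable (the empty word works), while Xa = bX has no solution at all, bounded or not.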
Last modified on 2001-06-14 08:58:32 by Falk Tschirschnitz <tschirsc@inf.ethz.ch>. Copyright © 2000-2001.