The talks are given (with some exceptions) in the Teleconference Room, 2nd
floor, Network Centre, at the Department of Informatics & Telecommunications,
and are broadcast live over the Internet. For instructions on watching the
live stream and the archived videos, see
http://lessons.di.uoa.gr/mediacenter.php.
Physical access: by bus,
or by metro and bus.
Upcoming talks:
12.00, Room Δ' — speaker from U. Connecticut
3.00 pm — speaker from Computation Science and Artificial Intelligence, Rey Juan Carlos University, Madrid
1.00 pm
12.00, Ground-floor Conference Room — speaker from U "La Sapienza"
3.00 pm (sharp), Room Δ — speaker from Stanford U.
Wed 7 December, 2.00-4.00 pm, Room Δ — Series of 2 talks: Parallel robots, interval arithmetic and optimization (Programme + Talks)
This talk will present some of the mathematics behind the asymmetric
cryptographic techniques that are used to provide secure communications
over the Internet. Number theory, probability, combinatorics and
computational complexity are all beautifully glued together to solve the
paradox of creating a secure channel using only insecure network communication.
Key exchange, discovered over 30 years ago, remains one of
the major research subjects in cryptography, and many aspects of it are
far from being understood. The talk will assume only basic background
in analysis of algorithms, probability and discrete mathematics.
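As a concrete illustration of the key-exchange idea mentioned above, here is a minimal Python sketch of textbook Diffie-Hellman over a prime field; the prime and generator are illustrative choices of ours, not parameters the talk endorses.

import secrets

# Toy Diffie-Hellman sketch. The prime is a Mersenne prime chosen for
# illustration, not a vetted cryptographic group.
p = 2**127 - 1
g = 3                                # public generator (illustrative)

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice publishes g^a mod p
B = pow(g, b, p)                     # Bob publishes g^b mod p

# Both sides derive the same secret; an eavesdropper who sees only
# p, g, A, B faces a discrete-logarithm-type problem.
assert pow(B, a, p) == pow(A, b, p)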
Of the various proposals for dealing with default negation in logic
programming that go beyond the methods of ordinary Prolog, the Well-Founded
Semantics (WFS) of Van Gelder, Ross and Schlipf (1991) has proved to be
one of the most attractive and resilient. Its favourable computational
properties in particular have made it popular among system developers, and the
well-known implementation XSB-Prolog is now extensively used in AI problem solving
and applications in knowledge representation and reasoning. This talk discusses
the logical foundations of WFS and proposes a solution to a long-standing
open problem: how to characterise partial stable and well-founded models
as minimal models in a suitable nonclassical, non-modal logic. We thereby
obtain a deductive base logic for well-founded inference as well
as a means to extend WFS to disjunctive programs and arbitrary propositional
theories. A major challenge is to axiomatise the base logic.
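To make the computational side concrete, here is a small Python sketch (ours, not part of the talk) of the standard alternating-fixpoint construction of the well-founded model for a ground normal program; the rule encoding and function names are our own.

def gamma(rules, interp):
    """Gelfond-Lifschitz-style operator: least model of the program's
    reduct with respect to the set of atoms assumed true."""
    # a rule is (head, positive_body, negative_body); drop rules whose
    # negative body clashes with interp, then forget negative literals
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & interp)]
    model, changed = set(), True
    while changed:                       # least model by forward chaining
        changed = False
        for h, pos in reduct:
            if h not in model and pos <= model:
                model.add(h)
                changed = True
    return model

def well_founded(rules, atoms):
    """Alternating fixpoint: the under-estimate of true atoms and the
    over-estimate of possible atoms converge to the well-founded model."""
    true = set()
    while True:
        possible = gamma(rules, true)      # atoms not provably false
        new_true = gamma(rules, possible)  # atoms certainly true
        if new_true == true:
            return true, possible - true, atoms - possible
        true = new_true

# p :- not q.   q :- not p.   r :- not s.
rules = [("p", set(), {"q"}), ("q", set(), {"p"}), ("r", set(), {"s"})]
print(well_founded(rules, {"p", "q", "r", "s"}))
# -> r is true, s is false, p and q remain undefined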
--------------
David Pearce obtained his DPhil degree (1980) from the University
of Sussex (UK) and his Habilitation (1986) from the Free University of
Berlin (Germany). He has been a Lecturer at the FU Berlin and an Acting
Professor at the universities of Goettingen and Heidelberg. From 1994 to 2000
he was Senior Researcher at the German Research Centre for Artificial Intelligence
(DFKI, Saarbruecken) where he coordinated the European Network of Excellence
in Computational Logic. From 2000-2002 he was Scientific Officer at the
European Commission (DG Information Society: Future and Emerging Technologies).
He was formerly a Heisenberg Research Fellow of the German Research Council,
DFG, and is currently a Ramón y Cajal Research Fellow of the Spanish Ministry
of Education and Science, working at the Rey Juan Carlos University
in Madrid. He has published 2 monographs and around 80 research papers.
His current interests are in Logic and its applications to Artificial
Intelligence and Computer Science, but he has also written extensively
in the areas of Philosophy of Science and Technology.
Game theory and economics provide a mathematical framework for analyzing interactions among self-interested agents. However, the outcomes proposed by economic theory often involve optimization problems with no known efficient algorithms.
In this talk we will study the computational complexity of various economic
solution concepts. The first part of the talk will focus on the problem
of allocating a set of indivisible goods to a set of agents, who express
preferences over combinations of items through their utility functions.
Several objective functions have been considered
in the economic literature in different contexts. In fair division
theory, a desirable outcome is to produce an allocation that minimizes
the envy between the agents or achieves additional fairness criteria. In
the context of auctions, an alternative goal is to maximize the social
welfare, i.e., the total utility derived by the agents. We provide new
approximation algorithms as well as hardness results for these objectives,
for various types of utility functions. The second part will focus on computing
or approximating equilibrium concepts in games. I will present a result
on approximating Nash equilibria (outcomes from which no player has an
incentive to deviate) in noncooperative games.
Parts of this talk represent joint work with Richard Lipton, Amin Saberi,
Aranyak Mehta, Subhash Khot and Elchanan Mossel.
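As an illustration of the fair-division objective described above, the following Python sketch brute-forces an allocation of indivisible goods minimizing the maximum pairwise envy, assuming additive utilities; the exponential search also hints at why efficient algorithms are nontrivial. The function name and toy valuations are our own, not the speaker's.

from itertools import product

def min_envy_allocation(values):
    """Exhaustive search over allocations of indivisible goods,
    minimizing maximum pairwise envy (additive utilities assumed).
    values[i][g] = agent i's value for good g."""
    n, m = len(values), len(values[0])
    best, best_envy = None, float("inf")
    for assign in product(range(n), repeat=m):   # owner of each good
        bundles = [[g for g in range(m) if assign[g] == i]
                   for i in range(n)]
        # envy of i toward j: i's value for j's bundle minus i's own value
        envy = max(
            sum(values[i][g] for g in bundles[j])
            - sum(values[i][g] for g in bundles[i])
            for i in range(n) for j in range(n)
        )
        if envy < best_envy:
            best, best_envy = bundles, envy
    return best, best_envy

# Two agents, three goods: envy 0 means an envy-free allocation exists.
print(min_envy_allocation([[6, 1, 3], [2, 5, 3]]))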
We will present a general framework based on concepts of relational algebra for distance-based learning over relational schemata. The advantage of the proposed framework is that it requires no transformation of the representation of data that come in the form of relational databases. It is directly applicable to any relational database, without the need for type and mode definitions or conversions to logic programming, as is the case with most relational learning systems based on Inductive Logic Programming.
Our framework builds on the notions of tuples of relations and sets of tuples. We show how, by exploiting these elementary building blocks, learning examples can be represented via tree-like structures. In order to define distances between relational examples we will explore two avenues. Both are based on the definition of simple operators on tuples and sets of tuples, which are subsequently combined to provide a global operator on the full relational structure. The first approach is based on the use of classical distances over tuples and sets of tuples, and the second on the definition of kernels.
The user of the system has at his disposal a number of possible distance
operators from which he can choose, or alternatively, in what amounts to
"model selection", he can let the system perform the selection automatically.
Some results on well known relational datasets will be presented.
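The following Python sketch illustrates the flavour of the first avenue (classical distances): a tuple metric and a set-of-tuples metric combined into a distance over a one-level tree-like relational example. The specific metrics (Euclidean, Hausdorff-style), weights, and names are illustrative assumptions of ours, not the system's actual operators.

def tuple_dist(t1, t2):
    """Distance between two flat tuples of numeric attributes
    (Euclidean here; any per-attribute metric could be plugged in)."""
    return sum((a - b) ** 2 for a, b in zip(t1, t2)) ** 0.5

def set_dist(s1, s2):
    """Hausdorff-style distance between two sets of tuples: the largest
    distance from a tuple in one set to its nearest tuple in the other."""
    if not s1 or not s2:
        return 0.0 if s1 == s2 else float("inf")
    d = lambda s, t: max(min(tuple_dist(a, b) for b in t) for a in s)
    return max(d(s1, s2), d(s2, s1))

def example_dist(e1, e2, w_root=0.5, w_children=0.5):
    """A relational example as (root tuple, set of linked tuples);
    the same combination rule can be applied recursively for deeper trees."""
    (root1, children1), (root2, children2) = e1, e2
    return w_root * tuple_dist(root1, root2) + w_children * set_dist(children1, children2)

# Two toy "customer" examples: (root attributes, set of purchase tuples)
a = ((30.0, 1.0), {(5.0, 2.0), (1.0, 0.0)})
b = ((28.0, 1.0), {(4.0, 2.0)})
print(example_dist(a, b))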
In this paper we study the problem of data aggregation in a sensor network. Data aggregation is a technique that consists of combining redundant or correlated data in order to reduce the overall size of the data sent, thus decreasing network traffic and energy consumption. The framework we consider gives rise to a rich variety of interesting optimization problems. As a first set of results, we concentrate on sensor networks in which a routing tree connecting sensor nodes to the sink has been previously constructed, and we assume that data collected at the sensors have to reach the base station within a given deadline.
We give results for both the off-line and the on-line problem, in the
latter case also considering centralized vs distributed algorithms.
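To illustrate the basic saving that aggregation buys on a routing tree, here is a small Python sketch comparing message counts with and without in-network aggregation; the tree and the counting model are illustrative assumptions of ours, not the paper's formal framework.

def messages(tree, root):
    """Message counts on a routing tree: each node's reading reaches the
    sink either raw (one message per edge on its path) or fully
    aggregated (each edge carries a single combined packet).
    tree: dict mapping node -> list of children."""
    def depth_sum(node, depth):      # raw: path length per reading
        return depth + sum(depth_sum(c, depth + 1)
                           for c in tree.get(node, []))
    def edge_count(node):            # aggregated: one packet per edge
        return sum(1 + edge_count(c) for c in tree.get(node, []))
    return depth_sum(root, 0), edge_count(root)

# Sink s with two relays, each serving two sensors.
tree = {"s": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
print(messages(tree, "s"))   # 10 raw edge traversals vs 6 aggregated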
In 1951, Nash showed that every game has a mixed Nash equilibrium. His
proof is a reduction to Brouwer's fixpoint theorem and places the problem
of finding equilibria into the realm of 'exponential existence proofs'.
In fact, whether Nash equilibria can be computed in polynomial time has
remained open since that time, and has come under increased scrutiny during
the past two decades. In this talk, I will present some recent results
(joint work with Paul Goldberg and Christos Papadimitriou) that shed light
on this problem, in the negative direction.
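For a concrete feel of what a mixed Nash equilibrium is, here is a small Python sketch computing the fully mixed equilibrium of a 2x2 bimatrix game from the indifference conditions (it assumes an interior equilibrium exists, as in matching pennies); this is an illustration of the concept, not the reduction discussed in the talk.

def mixed_nash_2x2(A, B):
    """Interior mixed Nash equilibrium of a 2x2 bimatrix game via the
    indifference conditions (assumes a fully mixed equilibrium exists).
    Returns (p, q): probabilities of each player's first strategy."""
    # column player's mix q makes the row player indifferent between rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # row player's mix p makes the column player indifferent between columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching pennies: no pure equilibrium exists, but both players mixing
# 50/50 is an equilibrium -- the situation Nash's theorem resolves.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_nash_2x2(A, B))  # (0.5, 0.5)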
Biologists in the 20th century described the basic molecular processes governing life: (1) the "genetic dogma", describing how information in DNA is expressed into function within each cell; and (2) "evolution", describing how random changes in genetic information lead to the adaptation of a variety of species. Today, biology is changing rapidly from a descriptive science to a quantitative, information science. This change is driven by new technologies that enable high-throughput generation of biomolecular data. For example, we can now decipher the exact sequence of letters in our DNA; also, we can measure the activity of all genes in a cell with a single, inexpensive measurement.
As a result, computation has taken a central role in molecular biology.
Both the collection and the analysis of high-throughput biological data
require the use of sophisticated algorithms and software. In this talk,
I will follow the genetic dogma, "DNA -> gene transcript -> protein ->
cell" and show examples of the role of computation in quantifying each
of these steps: sequencing of DNA, prediction of gene transcripts, comparison
of proteins, and analysis of protein interactions that govern the functions
of a cell. I will also show examples of comparative genomics, where we
study how evolution works at the molecular level by comparing biological
data from many organisms, and more importantly how we can use these comparisons
to increase signal-to-noise ratio in our data and to better understand
the biology of a given organism.
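As a toy illustration of the first two arrows of "DNA -> gene transcript -> protein", here is a minimal Python sketch of transcription and translation; the codon table is deliberately truncated and the example sequence is made up.

# A minimal sketch of transcription (DNA -> RNA) and translation
# (RNA -> protein). Only a handful of the 64 codons are included.
CODON_TABLE = {
    "AUG": "M", "UUU": "F", "UUC": "F", "GGU": "G", "GGC": "G",
    "GCU": "A", "GCC": "A", "UAA": "*", "UAG": "*", "UGA": "*",
}

def transcribe(dna):
    """Transcription: the RNA copy of the coding strand replaces T with U."""
    return dna.upper().replace("T", "U")

def translate(rna):
    """Translation: read codons (triplets) until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "*":            # stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

print(translate(transcribe("ATGTTTGGCTAA")))  # MFG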
2.15 pm, D. Daney (INRIA): Parallel robots, introduction and basic problems.
Introduction to Interval Arithmetic
3.15 pm, Y. Lebbah (Oran, Algeria): Solving Rigorously Equations and
Optimization Problems
D. Daney: Parallel robots: Introduction and basic problems: After
presenting the importance of parallel manipulators for applications,
we examine some basic problems associated with this robot architecture,
namely: kinematics (inverse/forward), workspace analysis, singularities,
design, and calibration. For each, we highlight the different techniques
(numerical or algebraic) used and the open problems.
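To illustrate why the inverse problem is the easy direction for parallel architectures, here is a small Python sketch of inverse kinematics for a planar 3-RPR-style manipulator: given the platform pose, each leg length is a closed-form distance, whereas forward kinematics must solve the resulting nonlinear system. The anchor coordinates are made up for illustration.

from math import cos, sin, hypot

base = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.5)]         # fixed base anchors
platform = [(-0.5, -0.3), (0.5, -0.3), (0.0, 0.4)]  # anchors in platform frame

def inverse_kinematics(x, y, theta):
    """Leg lengths for a given platform pose (x, y, theta)."""
    lengths = []
    for (bx, by), (px, py) in zip(base, platform):
        # rotate the platform anchor and translate it to the world frame
        wx = x + px * cos(theta) - py * sin(theta)
        wy = y + px * sin(theta) + py * cos(theta)
        lengths.append(hypot(wx - bx, wy - by))
    return lengths

print(inverse_kinematics(2.0, 1.5, 0.1))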
Introduction to Interval Arithmetic: Interval arithmetic denotes
a large collection of techniques and methods, based on the interval
representation of real numbers, that produce numerically guaranteed
results. Approximating a set of numbers in this framework gives an inner
and an outer approximation of the set, as precise as desired.
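A minimal Python sketch of the idea, omitting the outward floating-point rounding a rigorous implementation would need:

class Interval:
    """Each real quantity is kept as an enclosing interval; every
    operation returns an interval guaranteed to contain the true result."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Evaluating x*x + x for x anywhere in [-1, 2] yields a guaranteed,
# though not always tight, enclosure of the range of the expression.
x = Interval(-1, 2)
print(x * x + x)   # [-3, 6] encloses the true range [-0.25, 6]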
Y. Lebbah: Solving Rigorously Equations and Optimization Problems
(ps.gz)
Interval methods have shown their ability to locate and prove the existence
of solutions and global optima in a safe and rigorous way. Unfortunately,
these methods are rather slow. Efficient solvers for system-solving and
optimization problems are based on linear relaxations. However, the latter
are unsafe, and may overestimate or, worse, underestimate the global
minima. We introduce an efficient and safe framework to rigorously
bound the global optimum as well as its location. Our framework uses consistency
techniques to speed up the initial convergence of the interval narrowing
algorithms. A lower bound is computed on a linear relaxation of the constraint
system and the objective function. All these computations are based on
a rigorous implementation of linear programming techniques with the Cplex and
Coin-Clp linear programming solvers in our solver iCOs.
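The following toy Python sketch combines these ingredients in the simplest possible way: interval lower bounds plus bisection yield a verified bracket on a global minimum. It uses naive interval evaluation instead of linear relaxations and is only a caricature of a solver like iCOs; the function and tolerances are our own.

def f_interval(a, b):
    # naive interval enclosure of f(x) = x*x + x on [a, b]
    sq = (a * a, a * b, b * b)
    return min(sq) + a, max(sq) + b

def f_point(x):
    return x * x + x

def minimize(lo, hi, tol=1e-6):
    """Interval bisection: prune boxes whose guaranteed lower bound
    exceeds a verified upper bound, and bracket the global minimum."""
    best_upper = f_point((lo + hi) / 2)   # a real f value: safe upper bound
    global_lower = float("inf")
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        bound = f_interval(a, b)[0]       # guaranteed lower bound on this box
        if bound > best_upper:            # box provably excludes the minimum
            continue
        best_upper = min(best_upper, f_point((a + b) / 2))
        if b - a < tol:
            global_lower = min(global_lower, bound)
        else:
            mid = (a + b) / 2
            boxes += [(a, mid), (mid, b)]
    return global_lower, best_upper       # brackets the true global minimum

print(minimize(-1.0, 2.0))  # encloses the true minimum -0.25 of x*x + x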
We study how to explore efficiently an insecure network, namely a network
that may contain a hostile node called a black hole, a model introduced by
S. Dobrev, P. Flocchini, G. Prencipe, and N. Santoro (2001).
A black hole is a highly harmful stationary process residing in a node
of the network that destroys all mobile agents visiting that node, without
leaving any trace. We consider the task of locating a black hole in a
(partially) synchronous network, assuming an upper bound on the time of
any edge traversal by an agent. The minimum number of agents capable of
identifying a black hole is two. For a given graph and a given starting
node, we are interested in the fastest possible black hole search by two agents.
For arbitrary trees, a 5/3 approximation algorithm has been given by J. Czyzowicz, D. Kowalski, E. Markou, and A. Pelc, while optimal algorithms have been given for special classes of trees (2004). R. Klasing, E. Markou, T. Radzik, and F. Sarracco proved that the problem is NP-hard (even in planar graphs) and gave a 27/8 approximation algorithm for arbitrary graphs (2005).
A variation of the problem, in which a subset of nodes is known to
be safe, was also proved NP-hard by J. Czyzowicz, D. Kowalski, E.
Markou, and A. Pelc, who also gave a 9.3 approximation algorithm (2004).
Finally, R. Klasing, E. Markou, T. Radzik, and F. Sarracco improved this
approximation ratio to 6 and showed that both versions of the problem
are APX-hard (2005).