INRIA Saclay and ENS Cachan, France
Serge Abiteboul graduated from Telecom Paris and holds a Ph.D. in computer science from the University of Southern California, Los Angeles, and a Thèse d'Etat from the University of Paris-Sud. His research is in databases, electronic commerce, document management, digital libraries and, more recently, Web data management. He is a researcher at INRIA Saclay and ENS Cachan, and has held professor positions at Stanford and Ecole Polytechnique. He is a co-author of Foundations of Databases, a standard reference in database theory. In 2000 he co-founded the start-up Xyleme. He received the 1998 ACM SIGMOD Innovation Award and the 2007 Prix d'Informatique de l'Académie des Sciences. He has been program chair of a number of conferences, including ICDT 1990, ICALP 1994, ACM PODS 1995, ECDL 1999 and VLDB 2009. In 2008 he was awarded an ERC Advanced Grant, Webdam, on the foundations of Web data management. He has been a member of the French Académie des Sciences since 2008.
Web Information Management and Knowledge Bases
The emergence of Web 2.0 and social-network applications has enabled more and more users to share sensitive information over the Web. The information we manipulate has many facets: data, annotations, localization (e.g., bookmarks), logins and keys, access rights, ontologies, beliefs, time and provenance information, etc. To find data, one typically has to perform a number of complex tasks, such as searching and querying, authentication, and data extraction. Increasingly, we also want to control how our personal data is used.
We will argue that all this should be viewed in the holistic context of a distributed knowledge base. More precisely, we use extensions of distributed datalog: logical statements capture these different facets of information, which are typically considered in isolation. Knowledge can be communicated, replicated, queried, updated, and monitored. Because we use a formal model, we can formally prove or disprove desirable properties such as soundness (data is acquired only legally) and completeness (one can acquire all the data one can legally claim). The model also supports complex reasoning for searching information across a rich mix of very different scenarios, from centralized servers to massively distributed systems, from fully trusted to untrusted parties, and over encrypted or clear information, which is the reality of today's Web.
This is joint work with Alban Galland, INRIA Saclay and ENS Cachan.
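To give a flavour of the rule-based view sketched above, the following is a minimal toy example, in Python rather than datalog proper, and not the formalism of the talk: the predicate names (owns, grants, canAccess) are illustrative assumptions. Facts are logical statements about access rights, and a bottom-up fixpoint derives who may access what, with the constraint that a grant is effective only if the granter already has legal access (the "soundness" flavour: data is acquired only legally).

```python
# Toy datalog-style fixpoint over access-rights facts.
# Predicate names are hypothetical, chosen for illustration only.

facts = {
    ("owns", "alice", "doc1"),            # alice owns doc1
    ("grants", "alice", "bob", "doc1"),   # alice grants bob access to doc1
    ("grants", "bob", "carol", "doc1"),   # bob re-grants that access to carol
}

def can_access(facts):
    """Naive bottom-up evaluation: owners can access their documents,
    and a grant takes effect only if the granter can already access
    the document, so access is never acquired illegally."""
    derived = {("canAccess", f[1], f[2]) for f in facts if f[0] == "owns"}
    changed = True
    while changed:
        changed = False
        for f in facts:
            if f[0] == "grants":
                _, granter, grantee, doc = f
                if ("canAccess", granter, doc) in derived and \
                   ("canAccess", grantee, doc) not in derived:
                    derived.add(("canAccess", grantee, doc))
                    changed = True
    return derived

acc = can_access(facts)
# alice, bob and carol can all access doc1; no other access is derivable
```

In a distributed setting, as in the talk, such facts and rules would live on different peers and be exchanged, replicated and monitored, but the logical reading stays the same.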
University of Southampton, UK
Wendy Hall is a Professor of Computer Science at the University of Southampton in the UK and was Head of the School of Electronics and Computer Science from 2002-2007.
In 2008 she was elected as President of the Association for Computing Machinery; the first person from outside North America to hold this position. She is a member of the Prime Minister’s Council for Science and Technology and is a founding member of the Scientific Council of the European Research Council.
She was awarded a DBE in the Queen's New Year's Honours list in 2009, was elected a Fellow of the Royal Society in May 2009, and received the 2009 Duncan Davies Medal, awarded by the Research and Development Society.
The Emerging Science of the Web and Why It Is Important
With the advent of the Internet and the World Wide Web, we are able to share information as never before. The Web has become a critical global infrastructure. Since its emergence in the mid-1990s, it has exploded into hundreds of billions of pages that touch almost all aspects of modern life. Today the jobs of more and more people depend on the Web. Media, banking and health care are being revolutionized by it, and governments are even considering how to run their countries with it.
Little appreciated, however, is the fact that the Web is more than the sum of its pages and it is more than its technical protocols. Vast emergent properties have arisen that are transforming society. E-mail led to instant messaging, which on the Web has led to social networks such as Facebook and Twitter. The transfer of documents led to file-sharing sites such as Napster, which have led to user-generated portals such as blogs, Flickr and YouTube. Web 2.0, tagging content with labels, is creating online communities that share everything from concert news to health care. Looking forward we are adding to the Web of documents by creating a Web of linked data. It is our hypothesis that this will become the dominant data sharing and integration platform and that its effect on the world will be as profound and unexpected as the impact of the first Web.
As we seek to understand the origins of the Web, appreciate its current state and anticipate possible futures, there is a need to address the critical questions that will determine how the Web evolves as both a social and a technical network. The emerging field that studies these issues is becoming known as Web Science. In this talk we will explore how this new science of the Web has become established, the insights that are beginning to emerge, and the major research and education challenges ahead.
University of Trento, Italy
John Mylopoulos holds a distinguished professor position (chiara fama) at the University of Trento and a professor emeritus position at the University of Toronto. He earned a PhD degree from Princeton University in 1970 and joined the Department of Computer Science at the University of Toronto that year. His research interests include conceptual modelling, requirements engineering, data semantics and knowledge management. Mylopoulos is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the Royal Society of Canada (Academy of Sciences). He has served as programme/general chair of international conferences in Artificial Intelligence, Databases and Software Engineering, including IJCAI (1991), Requirements Engineering (1997), and VLDB (2004). He is currently serving as co-editor of the Lecture Notes in Business Information Processing (LNBIP) series published by Springer-Verlag.
Requirements Engineering in the Days of Social Computing
Thanks largely to the Web and other technologies, we are experiencing the rise of a new paradigm for computing that often goes under the label of "social computing". In this paradigm, computing is conducted through services offered by one agent (the server) to another (the client). These services are assembled dynamically and adapt depending on circumstances. Moreover, the notion of "system" is extended to include software as well as human and organizational agents working together towards the fulfilment of stakeholder requirements. Most importantly, social computing leverages knowledge of human and organizational agents to conduct "computations" that go beyond traditional notions of computation. Early examples of this kind of computing include collaborative filtering, online auctions, prediction markets, reputation systems, etc.
The advent of this computing paradigm has drastically changed the nature of requirements for such systems. We review traditional and goal-oriented approaches to requirements engineering and argue for the need to extend such approaches (i) to accommodate the modeling and analysis of requirements preferences and priorities, (ii) to accommodate the notion of social commitment as the basic building block for specifying solutions to social problems, and (iii) to include a new class of requirements we call awareness requirements, which impose constraints on the adaptation mechanisms needed to meet stakeholder needs.
The research reported in this presentation is based on ongoing work by the author with Alex Borgida, Amit Chopra, Fabiano Dalpiaz, Neil Ernst, Paolo Giorgini, Ivan Jureta, Alexei Lapouchnian, and Vitor Souza.