20. April 2009 · 4 comments · Categories: climate science, EGU 2009 · Tags:

Okay, here’s my first attempt to do liveblogging from the EGU General Assembly in Vienna. I’ve arrived and registered, but now I face my first problem: the sheer scale of the thing. The printed program runs to 140 pages, and it doesn’t even tell you the titles and authors of any of the papers – it mainly consists of table after table mapping session titles to rooms. Yikes! 

And the next problem is that the wireless internet is swamped as 10,000 geoscientists all try to check their email at once. There’s this neat tool on the conference website that lets you click on any talk or session while you’re browsing the online program, to add it to your personal program. But as I can’t get the website to load right now, I can’t see what I put on my personal program. Ho hum. Ah, it’s loaded.

Looks like I got here too late for Stefan Rahmstorf’s review of the stability of the ocean circulation under climate change scenarios. I wanted to see it because Stefan has been consistently warning of higher sea level rise than the numbers in the IPCC reports.

Okay, next interesting paper is in the Earth Systems Informatics session, entitled “Semantic metadata application for information resources systematization in water spectroscopy”. Let’s see if I can find the room before the talk is over…

14:56: Found the room, but talk is nearly over. Next up is more promising anyway: Peter Fox, talking about Semantic Provenance Management for Large Scale Scientific Datasets…

15:06: Peter Fox is up. I should point out that this is the last talk in the session (which is on semantic interoperability, knowledge and ontologies). He’s showing some data flow analysis of the current way in which scientific images get passed around – processed images (e.g. gifs) get forwarded without their metadata. He’s characterized the problem as one of combining information from two different streams – the raw images from the instruments (which then get processed in various ways), and comments from observers (both human and tools) who are adding information about images. Another use case: two different graphs from the same data – a daily time series and a monthly time series (for Aerosol Optical Thickness), but the two averagings are produced by different tools, and one tool inverts the Y axis. The scientist working with the two graphs spent two days trying to figure out why the two series seemed to contradict each other! The key idea in this work seems to be the use of a semantic markup language, PML, for capturing additional information on datasets.
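To make the provenance idea concrete, here’s a minimal sketch (in Python, and emphatically not PML itself – all class and field names are my own illustrative assumptions) of the pattern Fox describes: every derived product carries a record linking it back to the raw data and the processing steps applied, so that two plots made from the same dataset by different tools can be compared.

```python
# Sketch only: a provenance record that travels with each derived product.
# The names Provenance, source, steps, and derive are illustrative
# assumptions, not part of PML or of Fox's actual system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Provenance:
    source: str                                      # identifier of the raw dataset
    steps: List[str] = field(default_factory=list)   # ordered processing steps

    def derive(self, step: str) -> "Provenance":
        """Return a new record with one more processing step appended."""
        return Provenance(self.source, self.steps + [step])

raw = Provenance(source="AOT_instrument_feed")
daily = raw.derive("daily average (tool A)")
monthly = raw.derive("monthly average (tool B, Y axis inverted)")

# Because both records share the same source, a mismatch between the two
# plots can be traced to the differing processing steps rather than to
# the underlying data.
print(daily.source == monthly.source)
print(monthly.steps)
```

With records like these attached to the two Aerosol Optical Thickness graphs, the scientist in Fox’s anecdote could have spotted the inverted Y axis in the processing history rather than spending two days puzzling over the data.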

15:30: Next session is on International Informatics Collaborations, and the first speaker is Bryan Lawrence (who, incidentally, I’ve been meaning to get in contact with since he commented on Jon’s blog back in December about our brainstorming sessions). He’s talking about the European Contribution to a Global Solution For Access to Climate Model Simulations. Much of the data is available at WDCC: the World Data Centre on Climate. From this summer, they are expecting to collect a petabyte of data per year. Main task right now: CMIP5, which is the collection of model runs ready for comparison and analysis for the next IPCC assessment. Overall impression of Bryan’s talk: lots of technology problems, just shipping large streams of data around, authentication, and not enough bandwidth! Oh, and there’s a new European Framework 7 project just starting up, IS-ENES, which is supposed to increase the collaboration around earth system modeling tools and frameworks.

15:45: Talk on EuroGEOSS, another big European project, just about to kick off next month. It’s hard to get my head around these big interoperability projects: to the outsider, it’s hard to tell what the project will actually focus on.

16:00: I like the title of the next one: Herding Cats: The Challenges of Collaboratively Defining the Geology of Europe, presented by Kristine Asch. Here’s her motivation: rich data exists, and it’s critical for society, but it’s hard to find, access, share, and understand. There is an EU directive on this, called INSPIRE: Infrastructure for Spatial Information in Europe. Looks like the directive involves lots of working groups. Kristine is talking about OneGeology, a project to create an interoperable geological map of Europe. The challenge is that everyone currently maps their geological survey data in different (incompatible) ways. Here’s an example of how deep the problem goes: 20 people sitting around a table trying to reach agreement on what terms like “bedrock” and “surface geology” mean. Lots of cultural challenges: 27 countries (each with their own geological survey organisation), 27 different standards, national pride, and everyone speaks different variants of English.

Question Session: here’s a good question – have they thought of bringing in any social scientists or anthropologists to study the communities involved in this, to ensure we learn appropriate lessons from the experience? Okay, I’m off on a tangent now, because this question reminded me of an old colleague, Susan Leigh Star, and I just discovered she has a book out that I somehow missed: Sorting Things Out: Classification and Its Consequences.

16:30: A talk on Siberia. Actually, on the Siberian Integrated Regional Study, important because of the role of melting permafrost in Siberia and its effect as a climate feedback. The disappointing thing about this talk is that he’s talking a lot about creating web portals, and not much about the real challenges.

17:30: Okay, change of scenery. There’s a session on Education, Computational Methods and Complex Systems in Nonlinear Processes in Geophysics. First talk, by Jeffrey Johnson (prof of complexity science), is about eToile, a Social Intelligent ICT-System for very large scale education in complex systems. Jeff was instrumental a few years ago in setting up the Complex Systems Society. The idea behind eToile (see also related project: Assyst) is to change how we design a curriculum. Normally: define curriculum & learning outcomes, create course materials, deliver them, examine the students, mark exam scripts, and pass/fail/grade the students. This is enormously expensive, especially the bits that are proportional to the number of students (e.g. marking is linear in # of students). You can set up a web portal to provide curriculum and assessment, with semi-automated marking. But how do we provide appropriate learning resources for very large numbers of students to use? Solution: create a “resource ecology”, initially populated with some junk URLs. When students submit work, they also have to submit the web resources they used; these become part of the ecology, and links that many students find useful rise in the ecology. Also, stuff that gets outdated falls in value as fewer students use it, or if the curriculum changes, you get the same effect. Jeff argues that this is a much cheaper, more scalable way of providing education. In the question session, he clarified: it’s only intended for grad students (who are sufficiently capable of self-study), and probably wouldn’t work for undergrads. Reminds me of some of the experiments we’ve played with on my grad courses with the students contributing web resources to our growing collection.
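The resource ecology mechanism, as I understood it, could be sketched in a few lines of Python – resources gain weight each time a student cites them in submitted work, and all weights decay over time so that outdated material sinks. The decay factor and scoring scheme here are my own assumptions, not details of eToile.

```python
# Sketch of a "resource ecology": citation counts rise with student use,
# and a periodic decay lets unused resources fall in value.
# The decay factor (0.9) is an illustrative assumption, not eToile's.

from collections import defaultdict

class ResourceEcology:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.scores = defaultdict(float)

    def submit(self, urls):
        """A student submits work, citing the web resources they used."""
        for url in urls:
            self.scores[url] += 1.0

    def end_of_term(self):
        """Decay every score, so resources nobody cites sink in the ecology."""
        for url in self.scores:
            self.scores[url] *= self.decay

    def ranked(self):
        """Resources ordered from most to least valued."""
        return sorted(self.scores, key=self.scores.get, reverse=True)

eco = ResourceEcology()
eco.submit(["complexity-primer.example", "junk.example"])
eco.submit(["complexity-primer.example"])
eco.end_of_term()
print(eco.ranked()[0])  # → complexity-primer.example
```

The nice property Jeff emphasized falls out directly: the per-student cost is just recording citations, so the mechanism stays cheap as the number of students grows.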

17:45: Next up: a talk on education around flood management. Big paradigm change, from fighting floods, where engineers are the dominant stakeholders, to “living with floods”, where it becomes much more of a shared issue among many different kinds of stakeholders. They set up displays with lots of cylinders of water to help people visualize different flood levels, and lots of hands-on learning to get stakeholders to take ownership of the problem.

Okay, jetlag kicking in seriously. Off to get some air…