Not much to report from this morning, but here are a few interesting talks from this afternoon:
15:30: Dick Schaap, speaking about SeaDataNet. Another big European project: 49 partners and 40 data centres. Most of the effort focusses on establishing standard data formats and metadata descriptions. The aim is to collect all the data providers into a federated system, with a single portal, a shopping basket for users to search for the data they need, and secure access to it through a single sign-on. Oh, and they use Ocean Data View (ODV) for interactive exploration and visualization.
15:45: Roy Lowry, of the British Oceanographic Data Centre, whose talk was “A RESTful way to manage ontologies”. He covered some of the recent history of the NERC Datagrid, and some of the current challenges: 100,000 concepts, organised into about 100 collections. The key idea is to give each concept its own URN throughout the data and metadata, with a resolving service to map URNs to URLs, each URL serving a SKOS document (a rough sketch of the idea follows the list). Key issues:
- Versioning – if you embed version numbers in the URNs, you have many URNs per concept. So the lesson is to define the URN syntax so that it doesn’t include anything that varies over time.
- Deprecation – you can deprecate concepts by moving the collection, so that the URN then refers to the replacement. But that means the URN of the deprecated concept changes. Lesson: implement deprecation as a change of status, rather than a change of address.
- WSDL structure – RDF triples are implemented as complex types in WSDL. So adding new relationships requires a change in the WSDL, and changing the WSDL during operation breaks the system.
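To make the resolution idea concrete, here’s a rough sketch of how a client might turn a persistent concept URN into a URL and fetch the SKOS description. The URN scheme, resolver endpoint, and query format here are made up for illustration – the real NERC vocabulary service may well look different:

```python
# Hypothetical sketch of URN-to-URL resolution for vocabulary concepts.
# The resolver endpoint and URN format below are invented for illustration;
# they are NOT the actual NERC vocabulary service API.
import urllib.parse
import urllib.request

RESOLVER = "https://vocab.example.org/resolve"  # hypothetical resolver endpoint

def resolve_concept(urn: str) -> str:
    """Map a persistent concept URN to a dereferenceable URL."""
    # The URN carries no version number, so it stays stable over time
    # (the versioning lesson above); the resolver decides what to serve.
    return f"{RESOLVER}?{urllib.parse.urlencode({'urn': urn})}"

def fetch_skos(urn: str) -> bytes:
    """Fetch the SKOS (RDF/XML) document describing the concept."""
    req = urllib.request.Request(
        resolve_concept(urn),
        headers={"Accept": "application/rdf+xml"},  # ask for the concept as RDF/XML
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example with a made-up URN. The returned SKOS document would carry the
# concept's labels and definition, plus a status flag marking deprecation --
# so a deprecated concept changes status, not address.
# doc = fetch_skos("urn:example:vocab:collection42:concept17")
```

The point of the indirection is that the URN embedded in data and metadata never changes, while the resolver is free to change where and how the SKOS documents are served.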
Oh, and this project supports several climate science initiatives: the Climate Science Modeling Language, and, of course, Metafor.
16:05: Massimo Santoro, on SeaDataNet interoperability, but I’m still too busy exploring the NDG website to pay much attention. Oh, this one’s interesting: Data Mashups based on Google Earth.
16:45: Oh, darn, I’ve missed Fred Spilhaus’s lecture on Boundless Science. Fred was executive director of the AGU for 39 years, until he retired last year. I was in the wrong room, and when I tried the webstreaming, of course it didn’t work. Curse this technology…
17:30: Now for something completely different: Geoengineering. Jason Blackstock, talking on Climate Engineering Responses to Climate Emergencies. Given that climate emergencies are possible, we need to know as much as we can about potential “plan B” options. Jason’s talk covered the outcome of a workshop last year that investigated what research would be needed to understand the effects of geoengineering. The workshop set aside the basic “should we?” question, along with the question of whether considering geoengineering approaches might undercut efforts to reduce GHG emissions.
Here’s the premise: we cannot rule out the possibility that the planet is “twitchy”, and might respond suddenly and irreversibly at tipping points. In which case we might need some emergency responses to cool the planet again. There are two basic categories of geoengineering – remove CO2 (which is likely to be very slow), or increase the albedo of the earth just a little bit (which could be very fast). The latter options are the most plausible, and the most realistic of these are cloud whitening and stratospheric aerosols, so that’s what the workshop focussed on. We know aerosols can act fast because of the data from the eruption of Mt Pinatubo. Ken Caldeira and Lowell Wood did some initial modeling that demonstrated how geoengineering through aerosols might work.
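To see why increasing the albedo “just a little bit” is enough, here’s my own back-of-envelope arithmetic (not from the talk):

```python
# Back-of-envelope estimate (mine, not from the talk) of how much a small
# change in planetary albedo alters the Earth's energy balance.
SOLAR_CONSTANT = 1361.0               # W/m^2, solar irradiance at top of atmosphere
MEAN_INSOLATION = SOLAR_CONSTANT / 4  # ~340 W/m^2, averaged over the whole sphere

delta_albedo = 0.01                   # a 1-percentage-point increase in reflectivity

# Reduction in absorbed solar radiation:
delta_forcing = MEAN_INSOLATION * delta_albedo
print(f"~{delta_forcing:.1f} W/m^2 of cooling")   # ~3.4 W/m^2

# For comparison, the radiative forcing from doubling CO2 is roughly 3.7 W/m^2,
# so a ~1% albedo increase is in the right ballpark to offset it.
```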
But there are major uncertainties: transient vs. equilibrium response; controllability and reversibility; ocean acidification continues unaffected; and we don’t know much about regional effects, or effects on weather systems. Cost is not really an issue ($10B – $100B per year), but how do we minimize the potential for unanticipated consequences?
- Engineering questions: which aerosols? Most likely sulphates. How and where to deploy them? Lots of options.
- Climate science questions: What climate parameters will be affected by the intervention? What would we need to monitor? We need a ‘red team’ of scientists on hand to calculate the effects, and assess different options.
- Climate monitoring: what do we need to measure, with what precision, coverage, and duration, to keep track of how the deployment is proceeding?
If we need to be ready with good answers in a ten-year timeframe, what research needs to be done to get there? Phase I: Non-intervention research. Big issue: it’s hard to learn much without intervention. Phase II: Field experiments. Big issue: you can’t learn much from a small ‘poke’; we need to understand scaling. Phase III: Monitored deployment.
Non-technical issues: What are sensible trigger conditions? Who should decide whether to even undertake this research? Ethics of field tests? Dealing with winners and losers from deployment. And of course the risk of ‘rogue’ geoengineering efforts.
Take-home messages: research into geoengineering responses is no longer “all or nothing” – there are incremental efforts that can be undertaken now. Developing an ‘on the shelf’ plan B option requires a comprehensive and integrated research program – one of at least ten years.
Some questions: How would this affect acid rain? Not much, because we’re talking about something on the order of 1% of our global output of sulphurous aerosols, and the problems of acid rain are declining steadily anyway. A more worrying concern would be the effect on tropospheric ozone.
Who decides? Some scientists are already saying we’ve reached a climate emergency. If the aim is to avoid dangerous tipping points (e.g. melting of the poles, destruction of the rainforests), at what point do we pull the trigger? No good answer to this one.
Read more: Journal special issue on geo-engineering.