I thought this sounded very relevant: the 4th International Verification Methods Workshop. Of course, it’s not about software verification, but rather about verification of weather forecasts. The slides from the tutorials give a good sense of what verification means to this community (especially the first one, on verification basics). Much of it is statistical analysis of observational data and forecasts, but there are some interesting points on what verification actually means. For example, to do it properly you have to understand the user’s goals: a forecast that puts a rainstorm in the wrong place might be useless for one purpose (e.g. managing flood defenses) but very useful for another (e.g. aviation). Which means no verification technique is fully “objective”.
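To make that concrete, here’s a minimal sketch (my own illustration, not taken from the workshop materials, with made-up data) of the kind of categorical verification the tutorials cover: score a yes/no rain forecast against observations with two standard measures, and notice that the same forecast can look excellent on one and mediocre on the other, depending on which errors the user cares about.

```python
def contingency_table(forecasts, observations):
    """Count hits, misses, false alarms, and correct negatives
    for a yes/no (1/0) event forecast."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecasts, observations):
        if f and o:
            hits += 1
        elif not f and o:
            misses += 1
        elif f and not o:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def pod(hits, misses):
    """Probability of detection: fraction of observed events that were forecast."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of forecast events that never happened."""
    return false_alarms / (hits + false_alarms)

# Hypothetical forecaster who over-warns: never misses a storm, but cries
# wolf half the time. Great for a user who must never be caught out by rain;
# poor for a user who pays a real cost for every false alarm.
forecasts    = [1, 1, 1, 1, 1, 0, 1, 1, 0, 1]
observations = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]

h, m, fa, cn = contingency_table(forecasts, observations)
print(f"POD = {pod(h, m):.2f}, FAR = {far(h, fa):.2f}")
# -> POD = 1.00, FAR = 0.50  (no misses, but half the warnings were false)
```

Which score “verifies” the forecast depends entirely on who is using it, which is exactly the point about objectivity above.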
What I find interesting is that this really is about software verification – checking that large complex software systems (i.e. weather forecast models) do what they are supposed to do (i.e. accurately predict weather), but there is no mention anywhere of the software itself; all the discussion is about the problem domain. You don’t get much of that at software verification conferences…