{"id":1701,"date":"2010-05-19T22:30:02","date_gmt":"2010-05-20T02:30:02","guid":{"rendered":"http:\/\/www.easterbrook.ca\/steve\/?p=1701"},"modified":"2010-05-19T22:30:02","modified_gmt":"2010-05-20T02:30:02","slug":"studying-team-coordination-and-software-verification-practices-for-earth-system-modeling","status":"publish","type":"post","link":"http:\/\/www.easterbrook.ca\/steve\/2010\/05\/studying-team-coordination-and-software-verification-practices-for-earth-system-modeling\/","title":{"rendered":"Studying team coordination and software verification practices for earth system modeling"},"content":{"rendered":"<p>I&#8217;ve been busy the last few weeks setting up the travel details for my sabbatical. My plan is to visit three different climate modeling centers, to do a comparative study of their software practices. The goal is to understand\u00a0how the software engineering culture and practices vary across different centers, and how the differences affect the quality and flexibility of the models. The three centers I&#8217;ll be visiting are:<\/p>\n<ul>\n<li>The <a title=\"NCAR main site\" href=\"http:\/\/www.ncar.ucar.edu\/\" target=\"_blank\">National Center for Atmospheric Research<\/a> (NCAR) in Boulder Colorado;<\/li>\n<li>The <a title=\"Max Planck Institut fur Meteorologie main website\" href=\"http:\/\/www.mpimet.mpg.de\/en\/startseite.html\" target=\"_blank\">Max-Planck Institute for Meteorology<\/a> (MPI-M) in Hamburg,      Germany;<\/li>\n<li>The <a title=\"Institut Pierre Simon Laplace main website\" href=\"http:\/\/www.ipsl.fr\/\" target=\"_blank\">Institute Pierre Simon Laplace<\/a> (IPSL) in Paris, France.<\/li>\n<\/ul>\n<p>I&#8217;ll spend 4 weeks at each centre, starting in July, running through to October, after which I&#8217;ll spend some time analyzing the data and writing up my observations. 
Here&#8217;s my research plan&#8230;<\/p>\n<p>Our <a title=\"Serendipity: Our paper on the Hadley study\" href=\"http:\/\/www.easterbrook.ca\/steve\/?p=974\" target=\"_blank\">previous studies<\/a> at the UK Met Office Hadley Centre suggest that there are many features of software development for earth system modeling that make it markedly different from other types of software development, and which therefore affect the applicability of standard software engineering tools and techniques. Tools developed for commercial software tend not to cater for the demands of working with high performance code for parallel architectures, and usually do not fit well with the working practices of scientific teams. Scientific code development has challenges that don&#8217;t apply to other forms of software: the need to keep track of exactly which version of the program code was used in a particular experiment, the need to re-run experiments with precisely repeatable results, and the need to build alternative versions of the software from a common code base for different kinds of experiments. Checking software &#8220;correctness&#8221; is hard because frequently the software must calculate approximate solutions to numerical problems for which there is no analytical solution. Because the overall goal is to build code to explore a theory, there is no oracle for what the outputs should be, and therefore conventional approaches to testing (and perhaps <a title=\"Serendipity: Climate Models are Good Quality Software\" href=\"http:\/\/www.easterbrook.ca\/steve\/?p=1679\" target=\"_blank\">code quality<\/a> in general) don&#8217;t apply.<\/p>\n<p>Despite this potential mismatch, the earth system modeling community has adopted (and sometimes adapted) many tools and practices from mainstream software engineering. These include version control, bug tracking, automated build and test processes, release planning, code reviews, frequent regression testing, and so on. 
Such tools may offer a number of potential benefits:<\/p>\n<ul>\n<li>they may increase productivity by speeding up the development cycle, so that scientists can get their ideas into working code much faster;<\/li>\n<li>they may improve verification, for example using code analysis tools to identify and remove (or even prevent) software errors;<\/li>\n<li>they may improve the understandability and modifiability of computational models (making it easier to continue to evolve the models);<\/li>\n<li>they may improve coordination, allowing a broader community to contribute to and make use of a shared code base for a wider variety of experiments;<\/li>\n<li>they may improve scalability and performance, allowing code to be configured and optimized for a wider variety of high performance architectures (including massively parallel machines), and for a wider variety of grid resolutions.<\/li>\n<\/ul>\n<p>This study will investigate which tools and practices have been adopted at the different centers, identify differences and similarities in how they are applied, and, as far as is possible, assess the effectiveness of these practices. We will also attempt to characterize the remaining challenges, and identify opportunities where additional tools and techniques might be adopted.<\/p>\n<p>Specific questions for the study include:<\/p>\n<ol>\n<li><strong><em>Verification<\/em><\/strong> \u2013 What techniques are used to ensure that the code matches the scientists\u2019 understanding of what it should do? In traditional software engineering, this is usually taken to be a question of correctness (does the code do what it is supposed to?); however, for exploratory modeling it is just as often a question of understanding (have we adequately understood what happens when the model runs?). 
We will investigate the practices used to test the code, to validate it against observational data, and to compare different model runs against one another, and assess how effective these are at eliminating errors of correctness and errors of understanding.<\/li>\n<li><strong><em>Coordination<\/em><\/strong> \u2013 How are the contributions from across the modeling community coordinated? In particular, we will examine the challenges of synchronizing the development processes for coupled models with the development processes of their component models, and how differing priorities among overlapping communities of users affect this coordination.<\/li>\n<li><strong><em>Division of responsibility<\/em><\/strong> \u2013 How are the responsibilities for coding, verification, and coordination distributed between different roles in the organization? In particular, we will examine how these responsibilities are divided between the scientists and other support roles, such as \u2018systems\u2019 or \u2018software engineering\u2019 personnel. We will also explore expectations about the quality of contributed code from end-user scientists, and the potential for testing and review practices to affect the quality of contributed code.<\/li>\n<li><strong><em>Planning and release processes<\/em><\/strong> \u2013 How do modelers decide on priorities for model development, how do they decide which changes to tackle in a particular release of the model, and how do they navigate between computational feasibility and scientific priorities? We will also investigate how the change process is organized, and how changes are propagated to different sub-communities.<\/li>\n<li><strong><em>Debugging<\/em><\/strong> \u2013 How do scientists currently debug the models, what types of bugs do they find in their code, and how do they find them? 
In particular, we will develop a categorization of model errors, to use as a basis for subsequent studies into new techniques for detecting and\/or eliminating such errors.<\/li>\n<\/ol>\n<p>The study will be conducted through a mix of interviews and observational studies, focusing on particular changes to the model codes developed at each center. The proposed methodology is to identify a number of candidate code changes, including recently completed changes and current work-in-progress, and to build a &#8220;life story&#8221; for each such change, covering how each change was planned and conducted, what techniques were applied, and what problems were encountered. This will lead to a more detailed description of the current software development practices, which can then be compared and contrasted with studies of practices used for other types of software. The end result will be an identification of opportunities where existing tools and techniques can be readily adapted (with some clear indication of the potential benefits), along with a longer-term research agenda for problem areas where no suitable solutions currently exist.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;ve been busy the last few weeks setting up the travel details for my sabbatical. My plan is to visit three different climate modeling centers, to do a comparative study of their software practices. 
The goal is to understand\u00a0how the software engineering culture and practices vary across different centers, and how the differences affect the [&hellip;]<\/p>\n","protected":false},"author":392,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[52],"tags":[],"aioseo_notices":[],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/1701"}],"collection":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/users\/392"}],"replies":[{"embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/comments?post=1701"}],"version-history":[{"count":2,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/1701\/revisions"}],"predecessor-version":[{"id":1767,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/1701\/revisions\/1767"}],"wp:attachment":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/media?parent=1701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/categories?post=1701"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/tags?post=1701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}