So, I made it to ICSE at last. I’m way behind on blogging this one: the students from our group have been here for several days, busy blogging their experiences. So far, the internet connection is way too weak for liveblogging, so I’ll have to make do with post-hoc summaries.

I spent the morning at the Socio-Technical Congruence (STC) workshop. The workshop is set up with discussants giving prepared responses to each full paper presentation, and I love the format. The discussants basically riff on ideas that the original paper made them think of, which often ends up being more interesting than the original paper itself. For example, Peri Tarr clarified how to tell whether something counts as a design pattern. A design pattern is a (1) proven solution to a (2) commonly occurring problem in a (3) particular context. To assess whether an observed “pattern” is actually a design pattern, you need to probe whether all three of these things are in place. For example, the patterns that Marcelo had identified do express implemented solutions, but he has not yet identified the problems/concerns they solve, nor the contexts in which the patterns are applicable.

Andy Begel’s discussion included a tour through learning theory (I’ve no idea why, but I enjoyed the ride!). On a single slide, he took us from the traditional “empty container” model of learning, through Piaget’s constructivism, Vygotsky’s social learning, Papert’s constructionism, Van Maanen & Schein’s newcomer socialization, Hutchins’ distributed cognition, and Lave & Wenger’s legitimate peripheral participation. Whew. Luckily, I’m familiar with all of these except the Van Maanen & Schein stuff – I’m looking forward to reading that. Oh, and an interesting book recommendation: “Anything that’s worth knowing is really complex”, from Wolfram’s A New Kind of Science. Then Andy posed some interesting questions: how long can software live? How big can it get? How many people can work on it? And he proposed that we should design for long-term social structures, rather than for modular architecture.

We then spent some time discussing whether designing the software architecture is the same thing as designing the social structure. Audris suggested that while software architecture people tend not to talk about the social dimension, they are in fact secretly designing it. If the two get out of synch, people are very adaptable – they find a way of working around the mismatch. Peri pointed out that technology also adapts to people. They are different things, with feedback loops that affect each other. It’s an emergent, adaptive thing.

And someone mentioned Rob DeLine’s keynote at CHASE over the weekend, in which he pointed out that only about 20% of ICSE papers mention the human dimension, and argued that we should seek to flip the ratio. To get it to 80%, we should insist that papers that ignore the people aspects have to prove that people are irrelevant to the problem being addressed. Nice!

After lots of catching up with ICSE regulars over lunch, I headed over to the last session of the Michael Jackson festschrift, to hear Michael’s talk. He kicked off with some quotes that he admitted he can’t take credit for: “description should precede invention”, and Tony Hoare’s “there are two ways to make a system: (1) make it so complicated that it has no obvious deficiencies, or (2) make it so simple that it obviously has no deficiencies”. And another which may or may not be original: “Understanding is a process, not a state”. And another interesting book recommendation: Personal Knowledge by Michael Polanyi.

So, here’s the core of MJ’s talk: every “contrivance” has an operational principle, which specifies how its characteristic parts fulfill their function. Further, knowledge of physics, chemistry, etc. is not sufficient to understand and recognise the operating principle. E.g. in describing a clock, the description of the mechanism is not a scientific description. While the physical sciences have made great strides, our descriptions of contrivances have not. The operational principle answers questions like “What is it?”, “What is it for?”, and “How do the parts interact to achieve the purpose?”. To supplement this, mathematical and scientific knowledge describes the underlying laws, the context necessary for success (e.g. a pendulum clock only works in the appropriate gravitational field, and must be completely upright – it won’t work on the moon, on a ship, etc.), the part properties necessary for success, possible improvements, specific failures and their causes, and the feasibility of a proposed contrivance.

MJ then went on to show how problem decomposition works:

(1) Problem decomposition – breaking the problem out into problem frames, e.g. for an elevator: provide prioritized lift service, brake on danger, and provide an information display for users.

(2) Instrumental decomposition – the building manager specifies the priority rules; the system uses the priority rules to determine its operation.

The sources of complexity are the intrinsic complexity of each subproblem, plus the interactions between subproblems. But he called for the use of free decomposition (free as in unconstrained): for initial description purposes, there are no constraints on how the subproblems will interact; the only driver is that we’re looking for simple operating principles.

Finally, he identified some composition concerns: interleaving (e.g. editing priority rules vs lift service); requirements elaboration (e.g. book loans vs member status); requirements conflict (e.g. inter-library loans vs member loans); switching (e.g. lift service vs emergency action); and domain sharing (e.g. a phone display shared by camera, GPS and email).
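To make the elevator example concrete for myself, here’s a minimal sketch – my own, not MJ’s, and the class names, priority scheme and switching rule are all illustrative assumptions – of how each subproblem can keep a simple operating principle of its own, with the composition concern (switching between lift service and emergency action) handled in one separate place:

```python
# Toy sketch of MJ's elevator decomposition (my invented names, not his notation).

class PriorityLiftService:
    """Subproblem 1: provide prioritized lift service."""
    def __init__(self, priority_rules):
        self.priority_rules = priority_rules   # set by the building manager (instrumental decomposition)
        self.requests = []

    def request(self, floor):
        self.requests.append(floor)

    def next_floor(self):
        # Serve the highest-priority outstanding request first.
        if not self.requests:
            return None
        self.requests.sort(key=lambda f: self.priority_rules.get(f, 0), reverse=True)
        return self.requests.pop(0)


class EmergencyBrake:
    """Subproblem 2: brake on danger."""
    def __init__(self):
        self.engaged = False

    def check(self, sensor_ok):
        self.engaged = not sensor_ok
        return self.engaged


class InfoDisplay:
    """Subproblem 3: provide an information display for users."""
    def show(self, message):
        print(f"[display] {message}")


class LiftController:
    """Composition: the switching concern (lift service vs emergency action)
    lives here, not inside the individual subproblems."""
    def __init__(self, service, brake, display):
        self.service, self.brake, self.display = service, brake, display

    def step(self, sensor_ok=True):
        if self.brake.check(sensor_ok):          # emergency action wins over lift service
            self.display.show("Emergency: lift halted")
            return None
        floor = self.service.next_floor()
        if floor is not None:
            self.display.show(f"Moving to floor {floor}")
        return floor


# Usage: the priority rules are just data fed in by the building manager.
controller = LiftController(
    PriorityLiftService(priority_rules={1: 10, 5: 1}),
    EmergencyBrake(),
    InfoDisplay(),
)
controller.service.request(5)
controller.service.request(1)
controller.step()                 # serves floor 1 first (higher priority)
controller.step(sensor_ok=False)  # emergency overrides lift service
```

The only point of the sketch is that each subproblem stays simple on its own; all the awkwardness is pushed into the composition, which is exactly where MJ says the complexity comes from.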

The discussion was fascinating, but I was too busy participating to take notes. Hope someone else did!
