I've spent most of this year wallowing through a huge amount of old, bad code. Diving into this mess got me thinking about Prof. Samuel Holtzman's "Intelligent Decision Systems", a book I read about twenty years ago (here's his TED talk). It changed how I approach the unknown: rather than seeing it as a big blob of nothing, I apply Holtzman's "Taxonomy of Ignorance" and dive in. The idea is to look at what you don't know and categorize it; understanding your ignorance makes it manageable.
Here's my programmer-biased view of Holtzman's taxonomy...
Combinatorial ('Computational' in the TED video)
You know there is a solution and you know how to get it, but the calculation cost prevents you from getting the answer; this is the foundation of encryption.
Watsonian
You have a complete model of the problem but lack an effective solution method, like Dr. Watson next to Sherlock Holmes. Or the uber programmer who can see a solution that no one else sees, even though everyone has the same data.
Gordian (interestingly, in the TED talk this comes after Ptolemaic; this is the order in the book)
What do you do when you no longer have a complete model of the problem? Whatever solution method you have will not work; you need to restate the problem. Alexander the Great could not untie the knot that Gordius, king of Phrygia, tied for the future ruler of Asia. Instead, he reframed the problem of untying it and simply cut the knot.
Our team is dealing with code we find incomprehensible. It was written over a dozen years with little regard for maintainability and is the worst example of technical debt I've ever seen. Instead of trying to unravel it, we've decided to bypass it: figure out the API methods, isolate parts, replace them, and discard the old code. The job changed from 'make the old code work' to 'make the application work'; cutting was more effective than fixing.
Ptolemaic
In this case you have a solution to the problem that is based on an inadequate model, like the Ptolemaic earth-centric model of the solar system vs. the more elegant Copernican model. Or, to put it another way, if all you have is a hammer, everything looks like a nail.
Computer programming is in that state: schools teach Java almost exclusively and students see "programming" through a procedural C syntax filter. It's sad that they are not exposed to Prolog, Lisp, Smalltalk, assembler, SNOBOL ... anything that would shake up their mental model, and maybe even provide a more elegant one.
Magical
You know something works, but it contains essential elements for which you have no effective formalization. Your car works, you rely on it, but when it breaks you suddenly realize that you do not know how it works. And that's a fair deal: we can't know everything. We do rely on the knowledge of others.
I'm happy to program without knowing all the details of how my keystrokes translate into electrical impulses that eventually render images on a screen, but I sure do notice when it breaks.
When it comes to code that I'm responsible for, magical ignorance is not an option. Our project schedule has slipped because of this: we'd rather delay deploying than rely on code that we don't understand.
Dark
When we lack a model we have no way to approach a solution. "What is life? Why are we here?" ... not comfortable questions, so some find refuge in religion and faith. Guess it's better to believe something than to admit that you just don't know.
Fundamental
Finally, what about cases where we don't even know the question exists? We don't have a model because we don't know to look for one. Like the innocence of youth; no three year old worries about politics.
Apply this list, add the understanding that you will know more tomorrow than you do today, and you can start eating that elephant, one bit at a time.
Simple things should be simple. Complex things should be possible.
Tuesday, 18 June 2013
Roassal visualization of Seaside components
At STIC 2013 Alexandre Bergel presented Roassal, a Smalltalk visualization engine. It looked like a nice fit for our project, where we build deeply nested Seaside views from VW window specs. Navigating the component structure can be confusing, so I decided to add a tree view using Roassal.
We have the ability to inspect individual components, and we added our own inspector subclass which gives us a place for a custom menu (in VW you can do that by overriding #inspectorClasses). The most used menu entry is 'Inspect Parent Path', which inspects an array of components built by walking the parent links from the selected component up to the root component.
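The menu action itself is small. A minimal sketch of what it could look like, assuming each component answers #parentComponent (with nil at the root):

inspectParentPath
    "Sketch only: walk the #parentComponent links (assumed nil at the root)
     and inspect the resulting path."
    | path current |
    path := OrderedCollection new.
    current := self.
    [current notNil] whileTrue: [
        path add: current.
        current := current parentComponent].
    path asArray inspect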
The parent path is handy, but it does not provide enough context, and navigating to a component outside of the parent path is a pain. It would be better to see a parent tree, with siblings and labels. Each of our components answers #parentComponent and #components. For the parent tree we just added each parent and each parent's components (siblings) into a set. Coding it in Roassal was easy...
visualizeParentPath
    "Open a Roassal tree view of the parent path, including each parent's siblings."
    | view list |
    list := self parentPathWithComponents.
    view := Roassal.ROMondrianViewBuilder view: Roassal.ROView new.
    view shape rectangle
        if: [:each | each hasUpdates] borderColor: Color red;
        if: [:each | each == self] fillColor: Color yellow;
        withText: [:each | each displayVisualizationLabel].
    view interaction
        item: 'inspect' action: #inspect;
        item: 'visualize' action: #visualizeParentPath.
    view nodes: list.
    view edgesFrom: #parentComponent.
    view treeLayout.
    view open.
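The #parentPathWithComponents method used above isn't shown; a possible sketch, assuming #parentComponent and #components behave as described:

parentPathWithComponents
    "Sketch only: answer a Set holding the receiver plus each parent and that
     parent's components (the siblings at each level)."
    | result current |
    result := Set with: self.
    current := self parentComponent.
    [current notNil] whileTrue: [
        result add: current.
        result addAll: current components.
        current := current parentComponent].
    ^result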
And here is what the resulting tree looks like (the mouse is hovering over the 'I' input field; the popup is the printString of the component)...
This has proven to be quite handy. A big thanks to everyone that contributed to Roassal.
Simple things should be simple. Complex things should be possible. Alan Kay.
Thursday, 9 May 2013
Agile Ditch Digging
I'm a strong believer in agile software development. Every project I've worked on evolved into something very different from what it started as. Agile embraces that type of change; it accepts that you will know more as you build; agile is about "managing ignorance".
On our current project we are migrating from a VW fat client to a Seaside application with short GS transactions that uses the legacy meta model, UI layouts and data structures. Our design isolates the UI layer in the VW Seaside server and the domain layer in GS. We parse the VW window spec into Seaside components and communicate with GS using RESTful data calls (nested arrays of strings and oop numbers), and get XML back.
All of that works well: it's quick, looks nice, scales and is much easier to wrap SUnit tests around. And it was built using agile development techniques. Mostly.
I find that agile works best in the 'construction' side of the work, where you can define the user stories and measure the pace of delivery (the ditch digging). There is, however, another flavour of software development, the R&D or 'creative' side, like designing the framework and tools that the application code rests on. It's not something the user sees; it's just part of the application's fabric.
Recently we thought about how we were dealing with widget level feedback. That's where you enter 'abc' in the 'name' field and get feedback when you move on to another field. If 'abc' is an invalid value, it would be useful to see that right then ('on blur', when the widget loses focus), instead of waiting until the 'save' button is pressed. The same goes for updating dependent values: if the 'comment' defaults to 'name', having it change when 'name' is entered is useful.
Our original approach was to do this behaviour in the web code, since it had access to the display components. We soon realized that it was more important that the code have access to the domain and be able to reuse the legacy validation code, so we moved the logic to GS. Seemed simple enough.
It turned out that shifting that one responsibility triggered a lot of framework redesign. Originally, the web component built up the set of changes and passed them to GS on 'save'. It was simple and worked. With the field level code on GS, each 'on blur' event had to trigger a GS call and, more importantly, had to package the full view update state into the call so the domain code would see the current displayed state. Performance is not an issue, since each call takes from 10 to 50 ms, but the code change was more complex than it originally seemed.
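As a rough illustration of the 'on blur' round trip (a sketch only: #displayedValue and #updateOnServer: are hypothetical names, and the real code also packages the full view state, which is not shown here):

renderNameFieldOn: html
    "Sketch: send the changed value to the domain layer when the widget loses focus."
    html textInput
        value: self displayedValue;
        onBlur: (html jQuery ajax
            callback: [:value | self updateOnServer: value]
            value: (html jQuery this value))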
I could find no good way to communicate the status of this 'big bang' change. The problem was complex, then things got delayed due to other work, and we made some critical design changes as we understood the technical issues better. None of that was well communicated. From the outside looking in, the project just stalled. Precisely the kind of optics you don't want, and the kind of problem that agile techniques are supposed to deal with.
I simply do not know how to measure 'thinking time', especially my own. Finding a solution may take me five minutes, or five hours.
It was interesting to feel the pendulum swing from 'creative' to 'construction' as the work progressed; the construction phase is so much easier. Easier to do, easier to measure and easier to manage. Everyone is more at ease when you can show that you're 80% done, vs. just telling them that "you're close".
Digging a ditch is easy, assuming you know how.
Simple things should be simple. Complex things should be possible.
Sunday, 24 March 2013
Inspecting nested Seaside components
I'm spending most of my time working on the tools to support a web port of a VW fat client application. We read windowSpecs and use them to build our own web 'widget' instances that know how to render themselves in Seaside using absolute positioning. This, combined with widget attribute meta data, has allowed us to automate most of the VW to Seaside port.
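For context, each spec-derived widget renders itself into an absolutely positioned div; a minimal sketch, where #left, #top, #width and #renderWidgetOn: are hypothetical accessors derived from the parsed windowSpec:

renderOn: html
    "Sketch: position the widget using the layout taken from its window spec entry."
    html div
        style: 'position: absolute; left: ', self left printString, 'px; top: ',
            self top printString, 'px; width: ', self width printString, 'px;';
        with: [self renderWidgetOn: html]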
A challenge with this approach is that the Seaside views can get complex, with deeply nested subcanvas components. When debugging a button or input field widget, it's nice not to have to spend time getting to the specific instance. Seaside's Halos work well for that, but not when the view has a lot of components; things get lost in the noise.
A few fields...
...is all it takes to make things messy...
To make things easier we added a hidden 'inspect' icon to each VW based component that is only rendered if 'WAAdmin developmentToolsEnabled' is true. It's toggled by a WAToolPlugin subclass icon. Here is how the same view looks with the inspect anchors visible (the title for each inspect icon is the display string of the object that would be inspected when clicked)...
Each 'widget' is contained in a div for absolute positioning. Inside this div we added the inspect render method...
renderInspectOn: html
    "Render a hidden inspect anchor for this widget; development mode only."
    Seaside.WAAdmin developmentToolsEnabled ifFalse: [^self].
    html anchor
        class: 'subcanvasInspector';
        style: 'display: none; position: absolute; ';
        title: self displayString;
        onClick: (html jQuery ajax
            script: [:s | s << (html jQuery ajax callback: [self inspect])]);
        with: [html image style: 'width: 12px; height: 12px; '; url: RepWebFileLibrary / #inspect16Gif].
...and then we toggle the display with...
renderInspectWidgetsToggleOn: html
    "Toolbar anchor that shows or hides every inspect anchor and swaps the +/- icon."
    html anchor
        onClick: (html jQuery class: 'subcanvasInspector') toggle;
        onClick: (html jQuery class: 'subcanvasInspectorPlus') toggle;
        with: [
            html image
                class: 'subcanvasInspectorPlus';
                url: Portal.RepWebFileLibrary / #inspectPlus24Gif.
            html image
                class: 'subcanvasInspectorPlus';
                style: 'display: none; ';
                url: Portal.RepWebFileLibrary / #inspectMinus24Gif].
Off...
On...
The other icons are for root component inspect, matching VW component inspect and a button to open the VW view (the deployed Seaside image has no VW domain view classes; these are being used during the application port).
With this setup we're able to port a VW application with 4497 window spec methods and keep our manual code work manageable.
Simple things should be simple. Complex things should be possible.
Sunday, 24 February 2013
RESTful GemStone with multiple sessions
The project I'm working on is moving to a full web deployment from a classic VW fat client + GS. Our layers are now Seaside on VW and GemStone. To allow for this migration, we've done interesting work with VW window specs and domain meta data, which I'll comment on in future posts. Here I'll go over how we are interfacing to GemStone from our Seaside server.
Our goal is to have a snappy application, so we view Seaside as a simple presentation layer. It contains no domain classes, only view components with widgets that are coded to know which attribute they display. Getting data from GS is done by packaging the list of displayed attributes, along with the oop of the displayed object, and sending it to GS. GS answers an XML string, which is parsed into 'domain node' objects (generic data holders) and then used by the display.
xmlDomainNodeOop:attributes:collectionOop: #(6944141057 #('comment') 656304641)

<domain>
    <oop>6944141057</oop>
    <objectClassName>BIDcustomer</objectClassName>
    ...
    <domain>
        <objectClassName>ByteString</objectClassName>
        <text>comment</text>
        ...
        <value>a comment string</value>
    </domain>
</domain>
We develop in a full VW client, so debugging is done by sending #xmlDomainNodeOop:attributes:collectionOop: from a workspace. Very handy when trying to recreate a user problem.
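Something like this (a sketch; RepGsInterface is a hypothetical stand-in for whatever object implements the call, and the mapping of the sample array onto the three keyword arguments is an assumption):

"Workspace snippet: recreate the request shown above."
| xml |
xml := RepGsInterface default
    xmlDomainNodeOop: 6944141057
    attributes: #('comment')
    collectionOop: 656304641.
xml inspect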
If you've ever worked with GemStone (and, if you're programming in Smalltalk, my sympathies if you have not), you'll be familiar with the GBS interface, the magical code which keeps objects in sync between the client and the server. You can choose to have methods execute where they make the most sense (like UI heavy methods on the client and big data footprint methods on the server) and know that your objects are in the correct state in both environments. Very cool.
With our oop + attributes question and XML answer, we don't make use of that GBS feature. In fact, we'd prefer a way to turn it off. Since each request and answer pair is independent, there is no need for session state and we can run each Seaside server with multiple sessions, using round robin dispatching.
Doing that introduces a few technical curiosities (and a huge thanks to Martin McClure for his help in fixing them). First, you can't share a class connector between GS sessions. In the single session model, I had a thin 'client dispatcher' class that forwarded class messages to a server class that interfaced with the domain model and provided tools for building the XML answer. For multiple sessions, I had to link to an instance, so both classes became singletons and each session defined its own connector.
Next, we tripped over the fact that you can't share objects between sessions. That makes sense in a GBS model, since syncing the objects is a session responsibility. But we were not passing domain objects, and each request and answer was done with new Array and String instances. It turned out the problem was a method with an attribute parameter coded as the literal #( ('value') ). Our parameter copy was not deep enough, so each session tried to connect the same nested array and we ended up with the error "Attempt to associate a Array with more than one GemStone session". Replacing the parameter with Array with: (Array with: 'value') fixed the problem. We've since added our own deepCopy extensions to prevent these types of problems in the future.
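The extension is the obvious one; a sketch of the idea (the selector #gsDeepCopy is hypothetical, and this assumes the arguments are nested arrays of strings and integers):

gsDeepCopy
    "Sketch of an Array extension: copy every nested array and string so no
     collection instance is shared between GemStone sessions."
    ^self collect: [:each |
        (each isKindOf: Array)
            ifTrue: [each gsDeepCopy]
            ifFalse: [each isString
                ifTrue: [each copy]
                ifFalse: [each]]]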
The session round robin mechanism is done with our own GS session wrapper. A class-side collection of session instances is rotated through, with the 'next session' pointer incremented after each use. If a session's semaphore is busy, we skip to the next one. If they're all busy, we wait on the one we started with. We've been testing with three sessions, and since most of our GS session access takes less than 50ms, it's very rare that we see a delay due to a busy session. We still have the occasional outlier (like the five seconds in this sample), but most of those are due to GS faulting pages in our test environment. We expect our production server to have everything cached.
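In outline the selection looks something like this (a sketch only; Sessions and NextIndex are hypothetical class variables, and #isBusy tests the wrapper's semaphore):

nextSession
    "Sketch: answer the next free session wrapper, or the one we started with
     if all are busy; Sessions and NextIndex are assumed class variables."
    | first candidate |
    first := Sessions at: NextIndex.
    candidate := first.
    [candidate isBusy] whileTrue: [
        NextIndex := NextIndex \\ Sessions size + 1.
        candidate := Sessions at: NextIndex.
        candidate == first ifTrue: [^first]].
    NextIndex := NextIndex \\ Sessions size + 1.
    ^candidate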
It will be interesting to see how well we scale. Our system uses a dispatcher image to direct Seaside sessions to GS images, so we can adjust to load by increasing the number of Seaside images, and the number of GS sessions each image can support.
So far this technology mix is working well for us, and we're getting very positive feedback from our users.
Simple things should be simple. Complex things should be possible.