
Monday, 12 September 2016

A year goes by...

September 14, 2015, we launched a new ERP system, written in Smalltalk, running on GemStone and developed with VisualWorks. The system continues to grow, we keep adding features, and our users are mostly happy. 

It got me thinking about my relationship with the company and the project.

Projects have tension between the technical and business needs. The person paying the bills makes the final call and they are being asked to take a leap of faith; they don't see what the developers see. It takes time to build up trust, yet most of the key decisions, like which technology to use, are made at the start of the project, long before the technical team is truly trusted. 

In our case we got a lot right: use Smalltalk to deal with unique and complex business needs, use GemStone as the database to avoid the cost of object to relational mapping, use a web interface to avoid fat client issues, and use Seaside to allow for a single technology stack (we're Smalltalk all the way down). 

We got a few things wrong. The worst was thinking that an old fat client framework was worth keeping. It wasn't, and I strongly argued against it. But that's a tough call for someone that is not familiar with the code. They see a sunk cost. How could it not have value?  

Over time everyone realized just how bad the old framework was, but by then we had invested a lot of time and effort making the domain code run in a new web framework. We're still struggling to remove the last bad bits. But I can see the risk management decision on this: it was scary to agree to throw away the old code and move to something new and unproven. It's self evident now; it wasn't then.  

But it made me think: just how relevant is the technology decision, like which language or framework to use? Our users don't care. They need tools to do their job. Management doesn't care. They want IT to provide services at minimal cost. As a Smalltalk team we're very efficient. But so what? A Java team would be easier to staff. Development would take longer, but they could get temporary help up to speed quickly to help get over humps. Technical consultants would actually be helpful (virtually none of the ones we've worked with knew Smalltalk).

And it's a general problem for anyone advocating an unconventional technology. Business might invest in a Smalltalk project if they see a return on investment, and if the risk is acceptable. But selling that vision in a world of deafening silence about Smalltalk is tough.

I haven't lost faith. Using Smalltalk allows us to be flexible in ways other teams could only dream of. Things will get a lot better, once we've scraped off the last of the old framework and are able to focus all of our time on building new stuff. I see a future where the development team is seen as a partner in the business. Where our ability to see business patterns and user flows gives us a voice. Where we're not just a cost of doing business.

That's my vision: that Smalltalk projects allow the developers to be partners in the business, since they don't need to wallow in technical minutiae. They can stay in the business head space, so they can add value beyond the code. I see that happening in our project, and I think it's an important part of the story when advocating for Smalltalk.

I am looking forward to the next year.


Simple things should be simple. Complex things should be possible.

Sunday, 3 January 2016

Lessons learned

The project I've been working on since May, 2012, went live September 14, 2015. It's an ERP system for a sales company, which specializes in industrial HVAC rep sales (where you 'represent' the manufacturer). It is nice to announce the deployment of a 100% Smalltalk application, built with VisualWorks, GemStone and Seaside.

Our users are happy, mostly. They want more features, and they want them sooner than later. Not a bad place to be.

Personally, it's been both a rewarding & frustrating project. Rewarding because I get to work for a far-sighted company that sees the value of a custom application, and can deal with the risks of using a niche technology. Frustrating because it could have been done better (which, I suspect, is true of just about any project).

The past couple of years have been a head down, ignore everything else, focused effort. We've done some interesting things, many of which I had hoped to share, but there never seems to be any time; work takes it all.

So, with the benefit of hindsight, here is what I've learned...

Have a project champion
Our project champion is one of the founders of the company. Without him risks would not have been taken, and the project would not have happened. We replaced a 20+ year old custom system that was also championed by the same person. He, and the company, believe that a custom ERP provides a competitive advantage. The old software proved it, which made selling the idea of a new system easier.

Smalltalk productivity rocks
Total effort was about 16 to 18 person years (our team size varied from 3 to 5 over 3.5 years). Compare that with effort to deploy something like SAP, and we look good. Our team's productivity will really shine as new features and customizations are rolled out over the next couple of years.

Expect a long tail of trivial things
What really stands out is how much time we spent (and continue to spend) on the little things. It tends to be boring, almost clerical work. But it's what users notice. Font sizes, colours, navigation sequences, default values, business rule adjustment... nothing intellectually challenging.

The beginning of the project was fun: figuring out how to use thousands of VW window specs in a Seaside application, including modal dialogs and dynamically morphing views. Finding ways to hold complex updates prior to a Save / Cancel decision. Building a new report framework that allows for edits and generates PDF content. Adding application permissions. Implementing a RESTful web-to-GS mechanism. And so on. All good stuff, but mostly done.

Pay your technical debt early
Looking back, we would have been better off not trying to preserve the ecosystem of the old fat client framework (the idea was to keep most of the domain code as is). Instead, we should have started from scratch, using the old system as a spec. The old framework was garbage. We knew it, but thought the technical debt could be managed. It was, but at a cost. We now know where we spent our time; it's evident that switching to new code earlier would have allowed us to deploy earlier.

If you see garbage code, be ruthless and get rid of it. Bad code is like a bed bug: it will keep biting until properly exterminated.

Show progress
Users need to see progress. And developers need feedback. We hit the jackpot with our beta users: they were willing to put up with a lot of early unfinished code. It gave them a view of what was to come, and they communicated that to the rest of the company. Our project dragged on a bit, but they saw progress, which made the delays palatable.  

Have clear metrics
It's so easy to get caught up in the moment, and to work on what is of interest right now, because that is what users see. But if you do that, you'll forget about the long term, and the important internal stuff just won't get done. If developers are not measured on the long term deliverables, there is little incentive to work on them.

Make long term metrics just as visible as short term ones. Break them down and make them part of each iteration, even if they are obscure and of no interest to the end users. It will be frustrating; you'll get asked "why are you working on that and not the feature I'm waiting for?". But they'll be far more aggravated if the application is not reliable. It's like backups: you don't notice their absence until you need them. Be sure they see the value of the boring internal tech stuff.

Use agile development 
We release a new production version every two weeks, with minor changes published twice each week. Developers merge their code every couple of hours. All new code is expected to have an SUnit test. The full test suite is run each night with Jenkins and keeping tests green is the first developer priority. We pair up for tricky problems. Refactoring is considered to be a 'technical investment'. All changes are tracked (we have a nifty issue management tool).
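
For flavour, a minimal test in that style might look like the sketch below (the Order and OrderLine names are made up for illustration, not from our code base)...

testTotalIncludesAllLines
	"Illustrative only: an order's total should be the sum of its line amounts."
	| order |
	order := Order new.
	order addLine: (OrderLine amount: 10).
	order addLine: (OrderLine amount: 15).
	self assert: order total = 25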

Reviewing the process is part of the process. We adjust things almost every week. It's not easy, but getting to a smooth productive rhythm is so worth it.

What next? Mobile web interfaces, an Android app (we can do that with Pharo once a VM is ready), moving to a GS Seaside interface (need to port PDF4Smalltalk to GS), and lots of small stuff.


Simple things should be simple. Complex things should be possible. - Alan Kay

Thursday, 13 November 2014

GemStone based reports & views

My current project is a port of a VisualWorks & GemStone fat client application to Seaside. Part of the porting effort was to map a few thousand VW windowSpec views to Seaside web views. It all works fine, but it's not ideal. Ported fixed layout views look like fat client windows; they lack a web aesthetic. We want all new views to be more 'web-centric', where positioning and sizing is adjusted by the browser, especially from tablets and mobile devices.

We also need to provide reports. For a web app, answering a PDF for a report works well.

We combined these two requirements and ended up with reports generated on GS using a Seaside-like coding pattern, which is then rendered by Seaside in VW, and can be viewed as a PDF.

To build the reports we use Report4PDF, something I wrote a few years ago.  It uses PDF4Smalltalk to generate a PDF document. PDF4Smalltalk has not been ported to GemStone, something I'd like to do when time allows (and to VA & Pharo). Fortunately, Report4PDF generates intermediate output before requiring PDF4Smalltalk. This output can be created on GS, which is then moved to VW, where PDF4Smalltalk is used for the final output.

Our VW to GS interface uses only strings, either XML or evaluated command strings. In this case, the report objects are packaged as XML, and then recreated on VW. For most reports building and parsing the content takes about 200ms (we may move this to a command string, which is typically a third faster).

Once the report is in VW we use a 'report component' for the rendering, which reads the report content and builds the Seaside output. Because Report4PDF has a Seaside-like coding style, the mapping is relatively simple.

For example, a table is defined as...

aTable row: [:row | 
row cell: [:cell | cell widthPercent: 20. cell text bold; string: 'Job'].
row cell: [:cell | cell widthPercent: 30. cell text; string: self job description].
row cell: [:cell | cell widthPercent: 20. cell text bold; string: 'Our Job ID'].
row cell: [:cell | cell widthPercent: 30. cell text; string: self job id]].

...and gets rendered as...



...the PDF output is...



...to build the PDF content we use the data already in VW.  No additional GS call is needed.

R4PObject, the root Report4PDF class, has a #properties instance variable to support extensions. We use this to add link and update capabilities to the report when it is rendered in Seaside.

For example, a link to another domain object is coded as...
row cell right bold string: 'Designer'.
row cell text normal string linkOop: self designer domainOop; string: self designer displayKeyString.

...and displayed as...



...but is ignored in the PDF output...


The beauty of this approach is that all of the report generation is done on GemStone, with generic rendering and PDF generation in our VW Seaside code.

Our users are happy with this approach. They like the look of the web rendered report and the option to get the content as a PDF. Having link and simple update capabilities means that most users will not need to use the old fat clients views, which tend to be used by power users, for data entry and for detailed updates.

Simple things should be simple. Complex things should be possible. - Alan Kay

Thursday, 22 May 2014

Smalltalk performance measurement

The application I'm working on uses VisualWorks and GemStone. As we've built out our application and loaded more test data we find ourselves spending more time tuning performance. If there is one thing I've learned over the years, it is that performance problems are never what they seem: always measure before you change the code. Premature optimization makes your code ugly and, more likely than not, adds no value.

On VW we use TimeProfiler & KaiProfiler, and on GS we use ProfMonitor. All are useful for getting a sense of where to look, after which we switch to more basic tools, like...

Time>>millisecondsToRun:, along with some convenience methods.

In VW you can use the Transcript to show performance measurements.
You could write something like...

Transcript cr; show: 'tag for this code'; show: (Time millisecondsToRun: [some code]) printString.

...but that's a pain. And you'd need the 'tag for this code' if you have several measurements spread throughout the code. To make that easier, we use...

'tag for this code' echoTime: [some code]

...which is implemented as...

echoTime: aBlock
| result microseconds | 
microseconds := Time microsecondsToRun: [result := aBlock value].
self echo: microseconds displayMicroseconds.
^result

...the #echo: method is commented on in a previous post and #displayMicroseconds is just...

Integer>>displayMicroseconds
self > 1000 ifTrue: [^(self // 1000) displayTime].
^self printString, 'µs'

...and displayTime shows hh:mm:ss.mmm with hh and mm displayed if needed.
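
Our displayTime is just string formatting; here is a rough sketch, assuming the receiver is a millisecond count (the real method has a few more display niceties)...

Integer>>displayTime
	"Sketch: show ss.mmm, prefixing minutes and hours only when needed; receiver is milliseconds."
	| stream |
	stream := WriteStream on: String new.
	self >= 3600000 ifTrue: [stream print: self // 3600000; nextPut: $:].
	self >= 60000 ifTrue: [stream print: self // 60000 \\ 60; nextPut: $:].
	stream print: self // 1000 \\ 60.
	stream nextPut: $.; nextPutAll: ((1000 + (self \\ 1000)) printString copyFrom: 2 to: 4).
	^stream contents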

In GS you could use the VW transcript with a client forwarder, but our application uses a simplified GS interface model with only one forwarder and no replication (an XML string is the only returned value from GS), so adding a client forwarder was something I did not want to do. I also wanted to run some tests from a Topaz script.

Instead of a Transcript I use a String WriteStream held in a GS session variable and use Time class methods to add measurements and show the results. To measure a block of code, we add nested time measurements with...

Time log: 'tag for this code' run: [some code]

...and we wrap the top method send with...

Time showLog: [top method] 

...some methods get called a lot, so we'd like a total time. For that we use...

Time sum: 'tag for this method' run: [some code] 

...because each Time method answers the block result we can insert the code easily...

someValue := self bigMethod
...vs...
someValue := Time log: 'bigMethod' run: [self bigMethod]


These are the methods...

Time>>showLog: aBlock
self timeSumDictionary: Dictionary new.
self timeLogStream: String new writeStream.
self timeLogStream nextPutAll: 'Time...'.
self log: 'time' run: aBlock.
^self timeLogStream contents , self displayTimeSums

...each time* variable is stored in the GS session array, like...

timeLogStream
^System __sessionStateAt: 77

timeLogStream: anObject
System __sessionStateAt: 77 put: anObject

log: aMessage run: aBlock
"Time showLog: [Time log: 'test' run: [(Delay forSeconds: 1) wait] ]"
| result milliseconds |
milliseconds := self millisecondsToRun: [result := aBlock value].
self timeLogStreamAt: aMessage put: milliseconds.
^result

timeLogStreamAt: aMessage put: anInteger
| stream | 
stream := self timeLogStream.
stream isNil ifTrue: [
stream := String new writeStream.
self timeLogStream: stream].
stream 
cr; nextPutAll: aMessage; tab; 
nextPutAll: anInteger displayTime.
self timeSumDictionaryAt: aMessage add: anInteger.

sum: aMessage run: aBlock 
"Time showLog: [
Time sum: 'test' run: [(Delay forSeconds: 1) wait].
Time sum: 'test' run: [(Delay forSeconds: 1) wait]]"
| result milliseconds | 
milliseconds := self millisecondsToRun: [result := aBlock value].
self timeSumDictionaryAt: aMessage add: milliseconds.
^result

timeSumDictionaryAt: aKey add: aValue
| dictionary total | 
dictionary := self timeSumDictionary.
dictionary isNil ifTrue: [
dictionary := Dictionary new.
self timeSumDictionary: dictionary].
total := dictionary at: aKey ifAbsent: [0].
total := total + aValue.
dictionary at: aKey put: total.
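
Two helpers used above are not shown: the timeSumDictionary accessors follow the same session-state pattern as timeLogStream, and displayTimeSums just walks the dictionary. A sketch (the slot number and formatting are illustrative, not necessarily what we use)...

timeSumDictionary
	^System __sessionStateAt: 78

timeSumDictionary: anObject
	System __sessionStateAt: 78 put: anObject

displayTimeSums
	"Sketch: one line per tag with the accumulated time."
	| stream |
	stream := WriteStream on: String new.
	stream nextPutAll: 'Totals...'.
	self timeSumDictionary keysAndValuesDo: [:tag :total |
		stream cr; nextPutAll: tag; tab; nextPutAll: total displayTime].
	^stream contents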


Simple things should be simple. Complex things should be possible.

Tuesday, 18 June 2013

Roassal visualization of Seaside components

At STIC 2013 Alexandre Bergel presented Roassal, a Smalltalk visualization engine. It looked like a nice fit for our project, where we build deeply nested Seaside views from VW window specs. Navigating the component structure can be confusing, so I decided to add a tree view using Roassal.

We have the ability to inspect individual components, and we added our own inspector subclass which gives us a place for a custom menu (in VW you can do that by overriding #inspectorClasses). The most used menu entry is 'Inspect Parent Path', which inspects an array of components built from walking the parent links from the selected component up to the root component.

The parent path is handy, but it does not provide enough context, and navigating to a component outside of the parent path is a pain. It would be better to see a parent tree, with siblings and labels. Each of our components answers #parentComponent and #components. For the parent tree we just added each parent and each parent's components (siblings) into a set (see the parentPathWithComponents sketch after the Roassal method below). Coding it in Roassal was easy...


visualizeParentPath

| view list |

list := self parentPathWithComponents.
view := Roassal.ROMondrianViewBuilder view: Roassal.ROView new.
view shape rectangle 
if: [:each | each hasUpdates] borderColor:  Color red;
if: [:each | each == self] fillColor:  Color yellow;
withText: [:each | each displayVisualizationLabel].
view interaction 
item: 'inspect' action: #inspect;
item: 'visualize' action: #visualizeParentPath.
view nodes: list.
view edgesFrom: #parentComponent. 
view treeLayout.
view open.
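
For completeness, parentPathWithComponents is just the set-gathering walk described above; a sketch, assuming the root component answers nil for #parentComponent...

parentPathWithComponents
	"Sketch: collect each component on the parent path plus its siblings, up to the root."
	| all current |
	all := Set new.
	current := self.
	[current isNil] whileFalse: [
		all add: current.
		all addAll: current components.
		current := current parentComponent].
	^all asOrderedCollection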


And here is what it looks like (the mouse is hovering over the 'I' input field; the popup is the printString of the component)...


This has proven to be quite handy. A big thanks to everyone that contributed to Roassal.

Simple things should be simple. Complex things should be possible.  Alan Kay. 

Thursday, 9 May 2013

Agile Ditch Digging

I'm a strong believer in agile software development. Every project I've worked on evolved into something very different from what it started as. Agile believes in embracing that type of change; it believes in understanding that you will know more as you build; agile deals with "managing ignorance".

On our current project we are migrating from a VW fat client to a Seaside application with short GS transactions that uses the legacy meta model, UI layouts and data structures. Our design isolates the UI layer in the VW Seaside server and the domain layer in GS.  We parse the VW window spec into Seaside components and communicate with GS using RESTful data calls (nested arrays of strings and oop numbers), and get XML back.

All of that works well: it's quick, looks nice, scales and is much easier to wrap SUnit tests around. And it was built using agile development techniques. Mostly.

I find that agile works best in the 'construction' side of the work, where you can define the user stories and measure the pace of delivery (the ditch digging). There is, however, another flavour of software development, the R&D or 'creative' side, like designing the framework and tools that the application code rests on. It's not something the user sees; it's just part of the application's fabric.

Recently we thought about how we were dealing with widget level feedback. That's where you enter 'abc' in the 'name' field and get feedback when you move on to another field. If 'abc' is an invalid value, it would be useful to see that right then ('on blur'; when the widget loses focus), instead of waiting until the 'save' button is pressed. The same goes for updating dependent values: if the 'comment' defaults to 'name', having it change when 'name' is entered is useful.

Our original approach was to do this behaviour in the web code, since it had access to the display components. We soon realized that it was more important that the code have access to the domain and be able to reuse the legacy validation code, so we moved the logic to GS. Seemed simple enough.

Turned out that shifting that one responsibility triggered a lot of framework redesign. Originally, the web component built up the set of changes, and passed them to GS on 'save'. It was simple and worked. With the field level code on GS, each 'on blur' event had to trigger a GS call and, more importantly, had to package the full view update state into the call so the domain code would see the current displayed state. Performance is not an issue, since each call takes from 10 to 50 ms, but the code change was more complex than it originally seemed.
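
As a rough illustration of the pattern only (the selector names here are invented, not our framework's), a field that validates on blur ends up looking something like...

renderNameFieldOn: html
	"Hypothetical sketch: on blur, send the current field value to GS in one ajax call, where the legacy validation runs against the packaged view state."
	html textInput
		value: self nameValue;
		onBlur: (html jQuery ajax
			callback: [:value | self validateOnGemStone: 'name' value: value]
			value: html jQuery this value)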

I could find no good way to communicate the status of this 'big bang' change. The problem was complex, then things got delayed due to other work, and we made some critical design changes as we understood the technical issues better. None of that was well communicated. From the outside looking in, the project just stalled. Precisely the kind of optics you don't want, and the kind of problem that agile techniques are supposed to deal with.

I simply do not know how to measure 'thinking time', especially my own.  Finding a solution may take me five minutes, or five hours.

It was interesting to feel the pendulum swing from 'creative' to 'construction' as the work progressed; the construction phase is so much easier. Easier to do, easier to measure and easier to manage. Everyone is more at ease when you can show that you're 80% done, vs. just telling them that "you're close".

Digging a ditch is easy, assuming you know how.


Simple things should be simple. Complex things should be possible.

Sunday, 24 March 2013

Inspecting nested Seaside components

I'm spending most of my time working on the tools to support a web port of a VW fat client application. We read windowSpecs and use them to build our own web 'widget' instances that know how to render themselves in Seaside using absolute positioning. This, combined with widget attribute meta data, has allowed us to automate most of the VW to Seaside port.

A challenge with this approach is that the Seaside views can get complex, with deeply nested subcanvas components. When debugging a button or input field widget, it's nice to not have to spend time getting to the specific instance. Seaside's Halos work well for that, but not when the view has a lot of components; things get lost in the noise.

A few fields...

...is all it takes to make things messy...

To make things easier we added a hidden 'inspect' icon to each VW based component that is only rendered if 'WAAdmin developmentToolsEnabled' is true. It's toggled by a WAToolPlugin subclass icon. Here is how the same view looks with the inspect anchors visible (the title for each inspect icon is the display string of the object that would be inspected when clicked)...

Each 'widget' is contained in a div for absolute positioning. Inside this div we added the inspect render method...

renderInspectOn: html
Seaside.WAAdmin developmentToolsEnabled ifFalse: [^self].
html anchor
class: 'subcanvasInspector'; 
style: 'display: none; position: absolute; '; 
title: self displayString; 
onClick: (html jQuery ajax 
    script: [:s | s << (html jQuery ajax callback: [self inspect])]);
with: [html image style: 'width: 12px; height: 12px; '; url: RepWebFileLibrary / #inspect16Gif].

...and then we toggle the display with...

renderInspectWidgetsToggleOn: html
	html anchor
		onClick: (html jQuery class: 'subcanvasInspector') toggle;
		onClick: (html jQuery class: 'subcanvasInspectorPlus') toggle;
		with: [
			html image
				class: 'subcanvasInspectorPlus';
				url: Portal.RepWebFileLibrary / #inspectPlus24Gif.
			html image
				class: 'subcanvasInspectorPlus';
				style: 'display: none; ';
				url: Portal.RepWebFileLibrary / #inspectMinus24Gif].


Off...
On...

The other icons are for root component inspect, matching VW component inspect and a button to open the VW view (the deployed Seaside image has no VW domain view classes; these are being used during the application port).

With this setup we're able to port a VW application with 4497 window spec methods and keep our manual code work manageable.

Simple things should be simple. Complex things should be possible.

Sunday, 24 February 2013

RESTful GemStone with multiple sessions

The project I'm working on is moving to a full web deployment from a classic VW fat client + GS. Our layers are now Seaside on VW and GemStone. To allow for this migration, we've done interesting work with VW window specs and domain meta data, which I'll comment on in future posts. Here I'll go over how we are interfacing to GemStone from our Seaside server.

Our goal is to have a snappy application, so we view Seaside as a simple presentation layer. It contains no domain classes, only view components with widgets that are coded to know which attribute they display. Getting data from GS is done by packaging the list of displayed attributes, along with the oop of the displayed object, and sending it to GS. GS answers an XML string, which is parsed into 'domain node' objects (generic data holders) and then used by the display.

xmlDomainNodeOop:attributes:collectionOop: #(6944141057 #('comment') 656304641)


<domain>
<oop>6944141057</oop>
<objectClassName>BIDcustomer</objectClassName>
...
<domain>
<objectClassName>ByteString</objectClassName>
<text>comment</text>
... <value>a comment string</value>
</domain>

We develop in a full VW client, so debugging is done by sending #xmlDomainNodeOop:attributes:collectionOop: from a workspace. Very handy when trying to recreate a user problem.

If you've ever worked with GemStone (and, if you're programming in Smalltalk, my sympathies if you have not), you would be familiar with the GBS interface, the magical code which keeps objects in sync between the client and the server. You can choose to have methods execute where they make most sense (like UI heavy methods on the client and big data footprint methods on the server) and know that your objects are in the correct state in both environments. Very cool.

With our oop + attributes question and XML answer, we don't make use of that GBS feature. In fact, we'd prefer a way to turn it off. Since each request and answer pair is independent, there is no need for session state and we can run each Seaside server with multiple sessions, using round robin dispatching.

Doing that introduces a few technical curiosities (and a huge thanks to Martin McClure for his help in fixing them). First, you can't share a class connector between GS sessions. In the single session model, I had a thin 'client dispatcher' class that forwarded class messages to a server class that interfaced with the domain model and provided tools for building the XML answer. For multiple sessions, I had to link to an instance, so both classes became singletons and each session defined its own connector.

Next, we tripped over the fact that you can't share objects between sessions. That makes sense in a GBS model, since syncing the objects is a session responsibility. But we were not passing domain objects, and each request and answer were done with new Array and String instances. Turns out the problem was a method with an attribute parameter coded as #( ('value') ). Our parameter copy was not deep enough, so each session tried to connect the nested array and we ended up with the error...
Attempt to associate a Array with more than one GemStone session
Replacing the parameter with Array with: (Array with: 'value') fixed the problem. We've since added our own deepCopy extensions to prevent these types of problems in the future.
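
The extension is nothing fancy; a sketch of the idea (ours covers a few more classes, and the selector name here is illustrative), making sure no nested Array or String is shared across sessions...

Array>>deepCopyForSession
	"Sketch: answer a fresh Array whose nested Arrays and Strings are fresh copies too."
	^self collect: [:each |
		(each isKindOf: Array)
			ifTrue: [each deepCopyForSession]
			ifFalse: [(each isKindOf: String)
				ifTrue: [each copy]
				ifFalse: [each]]]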

The session round robin mechanism is done with our own GS session wrapper. A class collection of session instances is rotated through, with the 'next session' pointer incremented after each use. If a session semaphore is busy, we skip to the next one. If they're all busy, we wait on the one we started with. We've been testing with three sessions, and since most of our GS session access takes less than 50ms, it's very rare that we see a delay due to a busy session. We still have the occasional outlier (like the five seconds in this sample), but most of those are due to GS faulting pages in our test environment. We expect our production server to have everything cached.
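
The selection logic itself is small; a simplified sketch (the sessions and nextIndex variables and the #isBusy test are stand-ins for our wrapper's actual names)...

nextSession
	"Sketch: rotate through the session list, skipping busy sessions; if all are busy, fall back to the one we started with and wait on it."
	| startIndex candidate |
	startIndex := nextIndex.
	[candidate := sessions at: nextIndex.
	nextIndex := nextIndex \\ sessions size + 1.
	candidate isBusy and: [nextIndex ~= startIndex]] whileTrue.
	candidate isBusy ifTrue: [candidate := sessions at: startIndex].
	^candidate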



It will be interesting to see how well we scale. Our system uses a dispatcher image to direct Seaside sessions to GS images, so we can adjust to load by increasing the number of Seaside images, and the number of GS sessions each image can support.


So far this technology mix is working well for us, and we're getting very positive feedback from our users.


Simple things should be simple. Complex things should be possible.

Thursday, 12 April 2012

Opening a Seaside view from VW

I spend a good chunk of my time working on adding web interfaces to legacy Smalltalk applications, both in VW and VA. The larger project is in VW and is built with a big in-house framework. A couple of years ago I wrote a VW windowSpec to Seaside component builder which used the framework metadata to bind Seaside components to domain objects. It worked, but required too much of an investment to fully deploy.

So, we took another look at what clients needed from a web interface and decided that a 'portal' model was a better fit: a limited access web site useful to a subset of users. It is implemented with a Seaside image that has no domain objects, just parsed XML data from a RESTful GS interface. Seaside sessions share one GemStone session and rely on the application framework for login and security. It works nicely.

One of the views is a table display of competitors by project, showing who is bidding on which section of the project, their bid status (won / lost / undecided), the estimated bid amount, and so on. This particular display was a challenge to do in the VW framework because it only supports a fixed number of columns in a table, and does not allow for in-cell editing (there may one day be support for the dataset widget).

We still wanted to make this display available to the VW users and the Seaside table looked nice. The solution was to launch a browser showing the selected table from a VW button press. This hybrid user interface (VW + web browser) may allow for a smoother incremental deployment of a full web based interface vs. an expensive and disruptive big bang approach.

When the 'Show table' button is pressed, a session token is saved on the logged in 'user' object in GemStone (each user has their own 'user' instance which handles things like application login and security). The oop of the saved session token is passed as a URL parameter (ExternalWebBrowser open: '...?start=12345678'), and the token contents (user oop and timestamp) are checked to see if it is valid: the oop of the user object must match the user object that contains the token, and the timestamp of the token must be within a few seconds. If it matches, the token is cleared and a Seaside session is established. Each token can only be used once, for a short time and to access an internal web site; seems reasonably safe.
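
The check on the GS side amounts to a few lines; a sketch with illustrative selector names (the real token carries a bit more state)...

isValidToken: aToken forUser: aUser
	"Sketch: the token must belong to this user and be only a few seconds old; it is cleared on first use."
	(aToken notNil and: [aToken userOop = aUser asOop]) ifFalse: [^false].
	aToken ageInSeconds > 10 ifTrue: [^false].
	aUser clearSessionToken.
	^true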

The token also contains display information which the Seaside image uses to build the table; a user presses a button and a browser opens on the expected table. Changes are stored in GemStone, so both the browser and the VW client see the same data.

Flyover components are rendered to display attributes and allow for updates. Users can change the status of a bid by pressing 'won', 'lost' or 'unknown' buttons in the flyover component. This is a quick way to edit the bid state vs. the VW based multi-window, multi-click sequence. I tried to use Seaside's jQuery tools to build the onMouseOver and onMouseOut scripts, but I found it simpler to just write the few lines I needed.

This script, as passed to the table data's #onMouseOver:, positions the hidden flyover component (aFlyoverId) to the left and top of the cell under the mouse (aCellId), and then shows it. I was able to do this with Seaside jQuery code, but I could not figure out how to add the cell width to the flyover's 'left' position.


onMouseOverFlyoverId: aFlyoverId cellId: aCellId
^'
$("#', aFlyoverId ,'").css("top",$("#', aCellId ,'").position().top);
$("#', aFlyoverId ,'").css("left",$("#', aCellId ,'").position().left + $("#', aCellId ,'").width() + 8);
$("#', aFlyoverId ,'").show();
$("#', aCellId ,'").css("background-color","#F2F7FA");
'


The flyover component has its own #onMouseOver: script to keep it visible when the mouse moves away from the cell and over the flyover component.


Views that show a consolidated view of objects, like the competitor table, are good candidates for the initial web interface. The XML based data gathering from GS is quick, since no domain objects are faulted to the client, and the display options are more flexible, whereas the VW fat client's detailed object level views are better for fine-grained data.

The next step is to merge the windowSpec Seaside component builder with the RESTful web portal. Not hard to do, but we'll need to see if there is client interest.

Simple things should be simple. Complex things should be possible.

Thursday, 8 March 2012

Who needs objects?

Dave Thomas has said that the object abstraction is too complex for the majority of programmers. Most business software is CRUD with a bit of business logic mixed in. And it can scale by building loosely coupled systems (works for the internet, eh). Dave is a giant; he sees far. I think he's right.

So what does this mean to an object evangelist like me?  Probably not much. That vacant look on most people's faces when you try to explain objects says it all. If it does not address an immediate need, object abstraction is noise.

At the Toronto Smalltalk User Group meetings we sometimes have one or two students from Ryerson University. By attending they've already indicated that they're interested in more than the generic C syntax procedural stuff they learn in school. Joshua Panar and Dave Mason, the two profs that sponsor our group and use Smalltalk in their OO course, have said that getting the regular students out is a challenge. They're not interested. They don't see it as improving their education or job prospects. Suggesting that they should broaden their horizons falls on deaf ears.

There are two types of programmers: the toolsmiths (abstractionists) and the tool users (constructionists). Smalltalk developers seem to all be abstractionists. It is natural for us to extend our environment. Want a framework? Build it. Need a new compiler behavior? Add it. It's easy; it's common for us, yet unheard of by others.

Most programmers are constructionists. They have a job building and maintaining business applications. As Smalltalkers we ask ourselves: how can we get these programmers to use Smalltalk, to see how much more productive and enjoyable our environment is? The answer, I believe, is to reduce barriers to entry.

How to do that? Here are my wishful thinking answers...

  • Merge the dialects (ya, I know: unlikely). Selecting a dialect as the first step in exploring Smalltalk is a big problem. You need to know a lot to make a good decision, at the point where you know little. Yes, the VW & Digitalk merger was a bust, but that was another time. I can dream...
  • Use a common online forum. The Balkanization of the Smalltalk community is a problem. Think of how hard it is for a Smalltalk curious person to find information. If we at least used a common forum, like Stack Overflow, it would be easier to find cross dialect posts, and it would be more visible to the larger developer community. I'll advocate for it again at the upcoming STIC conference, but I must be turning into a cynical cranky old fart, because I don't think there will be a change.
  • Support simple scripting. I know it's been done in various ways (S# was cool), but we should be able to point to a simple script tool for people to try. If there is a good option out there, consider this: I'm a Smalltalk cognoscenti, and I'm not aware of an option that does not require firing up an image. What does that say about how well we get the word out?
  • Start with prototype objects. Self and javascript got it right. It is easier to explain objects if you can defer talking about classes, and where the value of classes is discovered as a useful pattern.
  • Make Smalltalk IDEs rock. I know the Smalltalk vendors and volunteers have done a great job with the resources available, but the Visual Studios and Eclipses of the world are slick by comparison.
  • Examples. Lots of examples. It would be great if we could point to real applications that people could fire up, test and explore. And templates; wizard driven templates to help build new applications, like those found in MS Access. Need an application to track students? Here's an example and / or a tool to help you get started. If nothing else, make it easy to get started.

Yes, abstractions are hard. But abstraction allows you to do things that would be far too difficult and expensive otherwise. Knowing how to think in abstract terms is a powerful skill that will make you a better technologist. It is our job, as those that understand this, to make it self evident to others.

Simple things should be simple. Complex things should be possible.

Sunday, 8 January 2012

PDF Report and the Law of Demeter

I'm finishing a small project which uses Christian Haider's pdf4smalltalk to build report output using a Seaside influenced coding style. A report with a header, text and footer would be coded as...


| report |
report := PRwReport new.
report portrait.
report page: [:page |
page header string: 'This is a header'.
page text string: self someText.
page footer string: 'This is a footer'].
report saveAndShowAs: 'TestText.pdf'.


The tool supports the usual report output options, like fonts, alignment, tables, images, bullets and so on.

I've built a couple of other Smalltalk report frameworks over the years. One used Crystal Reports for the layout with configurable data gathering, and another (a much better tool) used Totally Object's Visibility (I did a presentation on that one at Smalltalk Solutions 2004). Both of those used a data + layout spec model, which, with the benefit of hindsight, was not the best choice. It was a challenge to keep the code and layout in sync. Maintenance was painful. For PDF Report I opted for Seaside's 'paint the content on a canvas' pattern. It is working nicely (I'll be presenting the details at the Smalltalk Industry Conference in Biloxi).

Here's the part that got me thinking about how nice objects are and the Law of Demeter...  when building a report output, you have to deal with coordinating the size and position of layout components on the page. Do you give the responsibility to the page, or do you have the layout objects find their own place? I opted for a 'builder': it knows how much space is available on a page and which layout objects need to be processed.

The interesting part was in deciding how much the builder needed to know about each layout. The first few iterations were rudimentary: each layout had a calculated height (word wrapped text with a selected font) and the builder would output as much as would fit on one page, then trigger a page break and continue on to the next page.

But that did not work with tables, since each row could have some cells that spanned pages. The builder could not blindly trigger a page break on a tall cell, since the next cell would be on the previous page. The table, row and cell had to communicate layout information to the builder, with the cell width and height dependent on neighbouring cells. And, to make things especially interesting, tables can be nested and cells can span rows and columns, like this...

...and this...



Each time I added a new layout mix to the SUnit tests I had to rethink what each object knew. After several iterations a pattern emerged: the less the builder knew about the layout objects, the better. And as the builder got dumber, its code got simpler and new layout mixes just worked. A tricky part was in sequencing the layout calculation for nested objects: a cell's height is dependent on the row's height, but the row's height is the maximum of its cell heights.

Once the calculation sequence was correct, each layout object was able to answer its layout values: position, margin and padding. The builder could ask if a layout could fit in the remaining space on a page without knowing what the layout object was (text, table, bullet, image or line) and could create a new physical page without knowing how a layout object would be split. Now each refactoring cycle starts with me asking myself: how can I reduce what the builder needs to know? The latest version is much cleaner than the first. It's nice to apply well known object design rules and see real results.
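
The core of the builder's loop reflects that rule; a sketch (selector names are approximations, not the released API)...

addLayout: aLayout
	"Sketch: the builder only asks whether a layout fits in the space left; the layout decides how to split itself."
	(aLayout fitsInHeight: self remainingPageHeight)
		ifTrue: [^self place: aLayout].
	self place: (aLayout splitToHeight: self remainingPageHeight).
	self startNewPage.
	self addLayout: aLayout remainder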

Still a lot of work to do, but I'm looking forward to showing it at the conference. And, if it's good enough, it will be added to the VW public store (long term plans are to port to other Smalltalk dialects).

Simple things should be simple. Complex things should be possible.

Tuesday, 15 November 2011

Smalltalk head space

At the last Toronto Smalltalk User Group meeting we had a basic introduction to Seaside. It had been requested by some of the members, especially by the two profs that sponsor our group at Ryerson University. They teach an OO course with a Smalltalk component and felt it would be of interest to their students. A few did show up and the feedback was very positive. A bit surprising, given how basic the demo was (here is a link to the PDF), but it got me thinking about how easy it is to get out of touch with how other people see things. There is something to be said about baby steps.

After the talk we spent a while talking about the sad state of computer science education; how it's become a C syntax programming training school. Way back in my day (early 80's) there was no dominant computer language. Intro to comp. sci. was taught in FORTRAN. We learned BASIC, PL/I, COBOL, SNOBOL, Lisp, 360 Assembler, Prolog and Smalltalk. My OS course was taught as a history lesson, explaining why and how each OS concept was introduced, and why some were abandoned. There was a big focus on 'why', not just 'how'. That does not seem to be the case today.

During the conversation, I talked a bit about how selling Smalltalk can be a challenge. Smalltalk is different. If all you know is C and Java, it's really different. Why learn something different if it will not help you get a job? I made the case that learning Smalltalk helps you learn how to think in objects. A perspective that may be helpful when dealing with the other object hybrid languages (it would also be good to learn Lisp, Prolog and assembler). You can't help thinking that a Java-centric education gives students only one tool: a hammer, and they then view every task as a nail.

But how do you communicate the value of a "Smalltalk head space"? A world where everything is a live object that you can see and change. A place where delegating behaviour to Integer and String makes perfect sense; where the language is minimalist and the libraries add the complexity; where working with a debugger feels natural. And how do you explain how sending messages is different than calling a function? The mechanics are simple enough, but the mental model is harder.

I see functions as a handing off of total control of the world to something else. Within a function you change whatever data you need to. The world is a large, porous bit universe that you manipulate. Whereas sending a message is asking another object to do something for you. You don't think about tickling the bits of another object. You do your job, send messages to other objects, and maybe answer something. You work in a local scope, not the whole world.

Every professional Smalltalk developer I know can talk about their "aha" moment, the point where object-think made sense. In my case it was a course at The Object People where Paul White was explaining how to model a transfer between a chequing and savings account: asking either account to do the transfer smelled wrong, so instead you could model the transaction. Really? You can do that? A 'transaction' can be an object? Cool. A more whimsical example is modelling the milking of a cow. Do you ask the cow to milk itself? Do you ask the milk to un-cow itself? No. You model a farmer to do the milking 'transaction'.

The problem is that those "aha" moments take time to learn. And who has time for that? As Dave Thomas said in his recent SPLASH video interview, the complexity that we've added to the development tools is not always a benefit to someone building a simple application. Do you really need to know about objects if a BASIC program will do? Is learning about classes and instances too big a hurdle? Would a prototype object model be easier to learn? I don't know, but we really have to find a better way to communicate the upside of using Smalltalk. Waiting for each student to experience their own "aha" moment is not good enough.

Simple things should be simple. Complex things should be possible.

Monday, 12 September 2011

XML RESTful Seaside interface to legacy system

I'm mandated with creating web interfaces to two legacy frameworks, one written in VA and the other in VW.

For the VW framework I started with a full application implementation; a Seaside interface that duplicated the entire application interface by parsing VW window specs and building Seaside components. But for both the VA and VW systems there is also a need for a 'portal' web site, one that exposes a limited view of the application but is intended for a larger user base.

Both systems were designed over a dozen years ago, using GemStone and a fat client. Grafting a multi-user Seaside interface onto a fat client designed for a single user was not going to work. That's why the VW 'full user' implementation uses one 200MB VW image per Seaside session, a deployment model that was not an option for the portal: too many users (a resource issue), too much exposed (a security concern), too integrated (a maintenance problem).

To work around these constraints I've implemented an XML based domain interface, where a Seaside image can gather the data it needs to render using a RESTful interface to GemStone. There is no domain model in the Seaside image, just components that know what part of the domain they represent. With GemStone this works particularly well since it eliminates the need to fault objects into the client. It's a common technique with GS, popularised by James Foster, to build strings on GS, fault them to the client, then parse and display the content.  Some displays, like lists of objects, can improve their performance from tens of seconds to sub-second.

Each Seaside component knows which dedicated server method to use. Since the portals have a limited scope there are not that many XML data methods. I would not use this approach if I expected the portal application scope to increase significantly.

All of this was made easier by thinking of the final rendering as a limited set of display patterns: lists, tables, trees, image and a single domain object. The XML has tags that identify the display pattern: <list> <table> <tree> <image> <domain>. These are used to build 'XmlObject' subclasses for each pattern. This makes accessing and debugging data easier than referencing the parsed XML code directly in the Seaside components.

Parsing XML is very different between VA and VW. Wrapping the results in my own XML node objects made it easy to port my code between dialects. I just needed to abstract out the XML parser references. Predefined tags, like 'oop', 'objectclass' and 'text', are stored in instance variables. Any other tags are stored in a dictionary. A Seaside component expects certain tags to be present and triggers an error otherwise.
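
The access pattern on those node objects is simple; a sketch (the selector and instance variable names are approximate)...

attributeAt: aTagName
	"Sketch: non-predefined tags live in a dictionary; a missing expected tag is an error, not a silent nil."
	^attributes at: aTagName ifAbsent: [self error: 'missing XML tag: ', aTagName]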

Some examples...

domain
- only attributes the Seaside component needs are included
- used in the list, table and tree patterns with minimal attributes; if selected the oop is used to get whatever else is needed
<domain>

<oop>461206257</oop>
<objectclass>AlaItemWorksheet</objectclass>
<text>ABTP Lab Results - D - DEN</text>
<icon>AlaDataEntry.ico</icon>
<label>ABTP Lab Results - D - DEN (TAB [Final Effluent])</label>
</domain>



list
- stored as named lists in the collection of attributes for a domain node
- each element of a list is another domain node, with enough information to be displayed (label, icon) and with the domain object oop in case it gets selected
<list type="actions">

<domain>
<oop>291900165</oop>
...
<domain>
<oop>291913433</oop>
...



table

- added tags for header and styles. The style content is encoded as key/value pairs so that the server code can include style information that a generic table component will render
<table>
<header>
<data>
<value>ABTP Lab Results - D - DEN</value>
</data>
...

tree
- each tree node knows how to get its sub-tree
- sub-trees are retrieved and cached when a tree node is expanded
<list type="productSet">


<domain>
<oop>199226369</oop>
<objectClass>INproductSet</objectClass>
<text>By Function</text>
<hasSubtree>true</hasSubtree>
<hasProducts>false</hasProducts>
</domain>


A Seaside component is created for each tree node which reads the 'hasSubtree' attribute and, based on a configuration, gets the nested list of domain nodes when expanded.




image
- for the VA implementation, images are stored as byte arrays on GS
- for the VW, the images are stored as server paths
- in each case, the image node needs to answer a byte array that can then be rendered
<domain>

<oop>249729481</oop>
<objectClass>AlaSiteMapImage</objectClass>
...
<image>
<oop>249728229</oop>
<mimeType>image/gif</mimeType>
</image>



Building the XML on the server is done with a 'canvas' object which wraps tags around indented nested content. Formatting of the XML content is a debugging convenience.

To add a domain object...


aCanvas 
addDomain: anItem 
text: anItem product id
with: [
aCanvas add: 'description' put: anItem product description.

...


To add a list...

aCanvas 
addListNode: productSet
type: 'productSet'
with: [
collection do: [:each | self buildXmlProductSet: each on: aCanvas]].


For both apps a user identifier, in the form of a GS object oop, is included in each data request. Seaside stores the user object oop in the WASession subclass instance variable. Communication between the Seaside image and GS is behind a firewall, so I'm not that concerned about it being monitored.

RESTful communication works well with my Smalltalk message passing sensibilities. Objects passing messages feels so natural, and debugging is easy since I can record and replay any message. And I don't care what gets changed on the server, as long as the Seaside XML methods answer the same way.

Although the code is written in both a VW and VA client, it's dialect agnostic; they could be swapped. I stuck with the two Smalltalks because I was replacing a domain model Seaside implementation in each, and there was enough sunk cost to leave that code as is.

Ideally I would like to host the Seaside sessions from GS with a GLASS deployment.  It's a long term option with the VW application, but the VA application has to be hosted on a Windows server, which limits us to 32 bit GS and no GLASS.

So far testing is going well. The next step is to test this under load and see how many concurrent Seaside sessions one image can handle. We already have a multi-Seaside image dispatcher implementation working with Apache that supports session affinity, so I'm confident that scaling will not be a problem.

Simple things should be simple. Complex things should be possible.

Wednesday, 1 June 2011

Low-tech Seaside graphs

I really like the basic 'how to' tutorials people post, whether they are about Seaside, C#, VisualStudio or motorcycle maintenance (that pretty much covers where my head has been lately).

So here are a couple of things I've done in Seaside that I think are simple yet proved handy.

The first example is from a home financial application that I wrote a few years ago as a way to learn Seaside. My wife and I use it to track our budget and daily transactions, which it imports from a scheduled download done by an iMacro script.

I wanted a bar graph to show how much of the budget for a category was spent for the month. And I wanted parts of the graph to support drilling down for more details. Here is what I ended up with (with some random data)...

...each square is one day of the month. The narrow grey line is to the right of today. If spending in a category is running ahead of the budget, it's red.

The graph is created as a two column table, with the title and the progress bar. The bar is created by rendering same sized (16x16) image buttons, one for each day, and a narrow divider button (3x16) for 'today'.

html imageButton
callback: [self openTransactions];
title: 'Show transactions';
url: TxFileLibrary / #progressredPng
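
The loop around it is equally plain; a sketch (the helper selectors and the green and divider image names are made up for illustration)...

renderBarFor: aCategory on: html
	"Sketch: one small image button per day of the month, red when that day's spending is over budget, with a narrow divider image after today."
	1 to: self daysInMonth do: [:day |
		html imageButton
			callback: [self openTransactions];
			title: 'Show transactions';
			url: TxFileLibrary / ((self isOverBudget: aCategory onDay: day)
				ifTrue: [#progressredPng]
				ifFalse: [#progressgreenPng]).
		day = self todayDayOfMonth ifTrue: [
			html image url: TxFileLibrary / #progresstodayPng]]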

The images were created with Paint.net, a handy tool for simple graphics.

I used a similar approach to create a Kanban graph for an issue tracking system. In this case, a table with fixed sized cells is filled with image buttons representing an issue, to a maximum of five rows. Selecting a cell shows the list of issues for that cell, and the title of each cell shows the display string of the corresponding issue.

I think it's cool what you can do with some dynamic HTML table generation and a few PNG files.

Tuesday, 3 May 2011

Pushing Smalltalk

The next meeting of the Toronto Smalltalk User Group is May 9 with Don MacQueen talking about JWARS.
See the web site for details.

I've been involved with the Toronto Smalltalk User Group for 20 years now, the past dozen or so as the primary organizer. Lately we've had a few people show up that were new to Smalltalk and wanted to learn more. We talk to them about the simple syntax, show them Pharo and Seaside, tell them about the other dialects, and try whatever we can to get them enthused.

But there is one core strength that I value which I cannot easily demo: the lack of brick walls.

When working with tools like MS's Visual Studio, I'm constantly frustrated by the lack of universal object inspection and 'down to primitive' code tracking. Coding something new with unfamiliar APIs gets a bit messy. You don't always know what you need until you trip over its absence. I find myself adding diagnostic code to show me stuff, which in Smalltalk I'd just debug and inspect. And sometimes in VS you just can't get what you want; you hit a brick wall. If it were not for Google and people posting esoteric workarounds, I don't know how I'd get anything done. And I love the examples that say "don't forget the comma, or else it won't work"... no error message, no warning, just no output.

Thing is, that's not the kind of thing you appreciate until you try something complex, which does not happen to a new user. I wonder if we would benefit from having code examples with pre-defined bugs, written in various languages, and then using them to show how you'd debug the problem. Show just how few barriers we have in Smalltalk to understanding what is going on in our code (and, more importantly, in other people's code).

I used to tell people that I saw a lot of similarity between programming in MVS 360 assembler and Smalltalk. For me, transparency was the big stick that I could use to whack the problem. From what I've seen of other IDEs (an admittedly short list), Smalltalk still does that the best.