The project I've been working on since May 2012 went live on September 14, 2015. It's an ERP system for a sales company that specializes in industrial HVAC rep sales (where you 'represent' the manufacturer). It's nice to be able to announce the deployment of a 100% Smalltalk application, built with VisualWorks, GemStone and Seaside.
Our users are happy, mostly. They want more features, and they want them sooner than later. Not a bad place to be.
Personally, it's been both a rewarding & frustrating project. Rewarding because I get to work for a far-sighted company that sees the value of a custom application, and can deal with the risks of using a niche technology. Frustrating because it could have been done better (which, I suspect, is true of just about any project).
The past couple of years have been a head down, ignore everything else, focused effort. We've done some interesting things, many of which I had hoped to share, but there never seems to be any time; work takes it all.
So, with the benefit of hindsight, here is what I've learned...
Have a project champion
Our project champion is one of the founders of the company. Without him risks would not have been taken, and the project would not have happened. We replaced a 20+ year old custom system that was also championed by the same person. He, and the company, believe that a custom ERP provides a competitive advantage. The old software proved it, which made selling the idea of a new system easier.
Smalltalk productivity rocks
Total effort was about 16 to 18 person years (our team size varied from 3 to 5 over 3.5 years). Compare that with effort to deploy something like SAP, and we look good. Our team's productivity will really shine as new features and customizations are rolled out over the next couple of years.
Expect a long tail of trivial things
What really stands out is how much time we spent (and continue to spend) on the little things. It tends to be boring, almost clerical work. But it's what users notice. Font sizes, colours, navigation sequences, default values, business rule adjustment... nothing intellectually challenging.
The beginning of the project was fun: figuring out how to use thousands of VW window specs in a Seaside application, including modal dialogs and dynamically morphing views. Finding ways to hold complex updates prior to a Save / Cancel decision. Building a new report framework that allows for edits and generates PDF content. Adding application permissions. Implementing a RESTful web-to-GS mechanism. And so on. All good stuff, but mostly done.
Pay your technical debt early
Looking back, we would have been better off not trying to preserve the ecosystem of the old fat client framework (the idea was to keep most of the domain code as is). Instead, we should have started from scratch, using the old system as a spec. The old framework was garbage. We knew it, but thought the technical debt could be managed. It was, but at a cost. We now know where we spent our time; it's evident switching to new code earlier would have allowed us to be deployed earlier.
If you see garbage code, be ruthless and get rid of it. Bad code is like a bed bug: it will keep biting until properly exterminated.
Show progress
Users need to see progress. And developers need feedback. We hit the jackpot with our beta users: they were willing to put up with a lot of early unfinished code. It gave them a view of what was to come, and they communicated that to the rest of the company. Our project dragged on a bit, but they saw progress, which made the delays palatable.
Have clear metrics
It's so easy to get caught up in the moment, and to work on what is of interest right now, because that is what users see. But if you do that, you'll forget about the long term, and the important internal stuff just won't get done. If developers are not measured on the long term deliverables, there is little incentive to work on them.
Make long term metrics just as visible as short term ones. Break them down and make them part of each iteration, even if they are obscure and of no interest to the end users. It will be frustrating: you'll get asked "why are you working on that and not the feature I'm waiting for?". But users will be far more aggravated if the application is not reliable. It's like backups: you don't notice their absence until you need them. Be sure they see the value of the boring internal tech stuff.
Use agile development
We release a new production version every two weeks, with minor changes published twice each week. Developers merge their code every couple of hours. All new code is expected to have an SUnit test. The full test suite is run each night with Jenkins, and keeping the tests green is the first developer priority. We pair up for tricky problems. Refactoring is considered a 'technical investment'. All changes are tracked (we have a nifty issue management tool).
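To give a sense of what "an SUnit test" means here, a minimal sketch looks something like the following. BIDcustomer is one of our domain classes (it appears in a later post); the test class and the accessors used are made up for illustration.

BIDcustomerTest>>testCommentUpdate
	"Hypothetical example of the small, fast tests expected with new code.
	 #comment: / #comment are assumed accessors, for illustration only."
	| customer |
	customer := BIDcustomer new.
	customer comment: 'a comment string'.
	self assert: customer comment = 'a comment string'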
Reviewing the process is part of the process. We adjust things almost every week. It's not easy, but getting to a smooth productive rhythm is so worth it.
What next? Mobile web interfaces, an Android app (we can do that with Pharo once a VM is ready), moving to a GS Seaside interface (need to port PDF4Smalltalk to GS), and lots of small stuff.
Simple things should be simple. Complex things should be possible. - Alan Kay
Thursday, 13 November 2014
GemStone based reports & views
My current project is a port of a VisualWorks & GemStone fat client application to Seaside. Part of the porting effort was to map a few thousand VW windowSpec views to Seaside web views. It all works fine, but it's not ideal. Ported fixed layout views look like fat client windows; they lack a web aesthetic. We want all new views to be more 'web-centric', where positioning and sizing is adjusted by the browser, especially from tablets and mobile devices.
We also need to provide reports. For a web app, answering a PDF for a report works well.
We combined these two requirements and ended up with reports generated on GS using a Seaside-like coding pattern, which are then rendered by Seaside in VW and can be viewed as a PDF.
To build the reports we use Report4PDF, something I wrote a few years ago. It uses PDF4Smalltalk to generate a PDF document. PDF4Smalltalk has not been ported to GemStone, something I'd like to do when time allows (and to VA & Pharo). Fortunately, Report4PDF generates intermediate output before requiring PDF4Smalltalk. This output can be created on GS, which is then moved to VW, where PDF4Smalltalk is used for the final output.
Our VW to GS interface uses only strings, either XML or evaluated command strings. In this case, the report objects are packaged as XML, and then recreated on VW. For most reports building and parsing the content takes about 200ms (we may move this to a command string, which is typically a third faster).
Once the report is in VW we use a 'report component' for the rendering, which reads the report content and builds the Seaside output. Because Report4PDF has a Seaside-like coding style, the mapping is relatively simple.
For example, a table is defined as...
aTable row: [:row |
row cell: [:cell | cell widthPercent: 20. cell text bold; string: 'Job'].
row cell: [:cell | cell widthPercent: 30. cell text; string: self job description].
row cell: [:cell | cell widthPercent: 20. cell text bold; string: 'Our Job ID'].
row cell: [:cell | cell widthPercent: 30. cell text; string: self job id]].
...and gets rendered as...
...the PDF output is...
...to build the PDF content we use the data already in VW. No additional GS call is needed.
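On the web side, the 'report component' walk is straightforward. As a rough sketch (this is not the production component; the accessors #rows, #cells, #isBold and #string on the recreated report objects are assumptions), a table like the one above could be emitted as...

ReportComponent>>renderTable: aTable on: html
	"Sketch only: walk the recreated report objects and emit a plain HTML table.
	 The accessors used here (rows, cells, isBold, string) are illustrative;
	 the real Report4PDF protocol differs."
	html table: [
		aTable rows do: [:row |
			html tableRow: [
				row cells do: [:cell |
					html tableData: [
						cell isBold
							ifTrue: [html strong: cell string]
							ifFalse: [html text: cell string]]]]]]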
R4PObject, the root Report4PDF class, has a #properties instance variable to support extensions. We use this to add link and update capabilities to the report when it is rendered in Seaside.
For example, a link to another domain object is coded as...
row cell right bold string: 'Designer'.
row cell text normal string linkOop: self designer domainOop; string: self designer displayKeyString.
...and displayed as...
...but is ignored in the PDF output...
The beauty of this approach is that all of the report generation is done on GemStone, with generic rendering and PDF generation in our VW Seaside code.
Our users are happy with this approach. They like the look of the web rendered report and the option to get the content as a PDF. Having link and simple update capabilities means that most users will not need to use the old fat client views, which tend to be used by power users for data entry and detailed updates.
Simple things should be simple. Complex things should be possible. - Alan Kay
Thursday, 22 May 2014
Smalltalk performance measurement
The application I'm working on uses VisualWorks and GemStone. As we've built out our application and loaded more test data, we find ourselves spending more time tuning performance. If there is one thing I've learned over the years, it is that performance problems are never what they seem: always measure before you change the code. Premature optimization makes your code ugly and, more likely than not, adds no value.
On VW we use TimeProfiler & KaiProfiler, and on GS we use ProfMonitor. All are useful for getting a sense of where to look, after which we switch to more basic tools, like...
Time>>millisecondsToRun:, along with some convenience methods.
In VW you can use the Transcript to show performance measurements.
You could write something like...
Transcript cr; show: 'tag for this code'; show: (Time millisecondsToRun: [some code]) printString.
...but that's a pain. And you'd need the 'tag for this code' if you have several measurements spread throughout the code. To make that easier, we use...
'tag for this code' echoTime: [some code]
...which is implemented as...
echoTime: aBlock
| result microseconds |
microseconds := Time microsecondsToRun: [result := aBlock value].
self echo: microseconds displayMicroseconds.
^result
...the #echo: method is commented on in a previous post and #displayMicroseconds is just...
Integer>>displayMicroseconds
self > 1000 ifTrue: [^(self // 1000) displayTime].
^self printString, 'µs'
...and displayTime shows hh:mm:ss.mmm with hh and mm displayed if needed.
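#displayTime itself isn't shown here; a rough equivalent (one possible implementation, not the original) that takes the receiver as milliseconds would be...

Integer>>displayTime
	"Sketch of one possible implementation, not the original: the receiver is
	 a number of milliseconds. Shows ss.mmm, with mm and hh only when needed."
	| hours minutes seconds millisText text |
	millisText := (self \\ 1000) printString.
	[millisText size < 3] whileTrue: [millisText := '0' , millisText].
	seconds := self // 1000 \\ 60.
	minutes := self // 60000 \\ 60.
	hours := self // 3600000.
	text := seconds printString , '.' , millisText.
	(hours > 0 or: [minutes > 0]) ifTrue: [text := minutes printString , ':' , text].
	hours > 0 ifTrue: [text := hours printString , ':' , text].
	^text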
In GS you could use the VW transcript with a client forwarder, but our application uses a simplified GS interface model with only one forwarder and no replication (an XML string is the only returned value from GS), so adding a client forwarder was something I did not want to do. I also wanted to run some tests from a Topaz script.
Instead of a Transcript I use a String WriteStream held in a GS session variable and use Time class methods to add measurements and show the results. To measure a block of code, we add nested time measurements with...
Time log: 'tag for this code' run: [some code]
...and we wrap the top method send with...
Time showLog: [top method]
...some methods get called a lot, so we'd like a total time. For that we use...
Time sum: 'tag for this method' run: [some code]
...because each Time method answers the block result we can insert the code easily...
someValue := self bigMethod
...vs...
someValue := Time log: 'bigMethod' run: [self bigMethod]
These are the methods...
Time>>showLog: aBlock
self timeSumDictionary: Dictionary new.
self timeLogStream: String new writeStream.
self timeLogStream nextPutAll: 'Time...'.
self log: 'time' run: aBlock.
^self timeLogStream contents , self displayTimeSums
...each time* variable is stored in the GS session array, like...
timeLogStream
^System __sessionStateAt: 77
timeLogStream: anObject
System __sessionStateAt: 77 put: anObject
log: aMessage run: aBlock
"Time showLog: [Time log: 'test' run: [(Delay forSeconds: 1) wait] ]"
| result milliseconds |
milliseconds := self millisecondsToRun: [result := aBlock value].
self timeLogStreamAt: aMessage put: milliseconds.
^result
timeLogStreamAt: aMessage put: anInteger
| stream |
stream := self timeLogStream.
stream isNil ifTrue: [
stream := String new writeStream.
self timeLogStream: stream].
stream
cr; nextPutAll: aMessage; tab;
nextPutAll: anInteger displayTime.
self timeSumDictionaryAt: aMessage add: anInteger.
sum: aMessage run: aBlock
"Time showLog: [
Time sum: 'test' run: [(Delay forSeconds: 1) wait].
Time sum: 'test' run: [(Delay forSeconds: 1) wait]]"
| result milliseconds |
milliseconds := self millisecondsToRun: [result := aBlock value].
self timeSumDictionaryAt: aMessage add: milliseconds.
^result
timeSumDictionaryAt: aKey add: aValue
| dictionary total |
dictionary := self timeSumDictionary.
dictionary isNil ifTrue: [
dictionary := Dictionary new.
self timeSumDictionary: dictionary].
total := dictionary at: aKey ifAbsent: [0].
total := total + aValue.
dictionary at: aKey put: total.
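The timeSumDictionary accessors and #displayTimeSums aren't listed above; they follow the same session-slot pattern. A sketch (the slot number 78 and the output layout are assumptions) would be...

timeSumDictionary
	^System __sessionStateAt: 78

timeSumDictionary: anObject
	System __sessionStateAt: 78 put: anObject

displayTimeSums
	"Sketch: format the totals accumulated by #sum:run: and
	 #timeSumDictionaryAt:add:. Slot 78 and this layout are assumptions."
	| stream |
	stream := String new writeStream.
	stream nextPutAll: 'Totals...'.
	self timeSumDictionary keysAndValuesDo: [:tag :total |
		stream cr; nextPutAll: tag; tab; nextPutAll: total displayTime].
	^stream contents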
Simple things should be simple. Complex things should be possible.
Thursday, 9 May 2013
Agile Ditch Digging
I'm a strong believer in agile software development. Every project I've worked on evolved into something very different from what it started as. Agile believes in trapping that type of change; it believes in understanding that you will know more as you build; agile deals with "managing ignorance".
On our current project we are migrating from a VW fat client to a Seaside application with short GS transactions that uses the legacy meta model, UI layouts and data structures. Our design isolates the UI layer in the VW Seaside server and the domain layer in GS. We parse the VW window spec into Seaside components and communicate with GS using RESTful data calls (nested arrays of strings and oop numbers), and get XML back.
All of that works well: it's quick, looks nice, scales and is much easier to wrap SUnit tests around. And it was built using agile development techniques. Mostly.
I find that agile works best in the 'construction' side of the work, where you can define the user stories and measure the pace of delivery (the ditch digging). There is, however, another flavour of software development, the R&D or 'creative' side, like designing the framework and tools that the application code rests on. It's not something the user sees; it's just part of the application's fabric.
Recently we thought about how we were dealing with widget level feedback. That's where you enter 'abc' in the 'name' field and get feedback when you move on to another field. If 'abc' is an invalid value, it would be useful to see that right then ('on blur', when the widget loses focus), instead of waiting until the 'save' button is pressed. The same goes for updating dependent values: if the 'comment' defaults to 'name', having it change when 'name' is entered is useful.
Our original approach was to do this behaviour in the web code, since it had access to the display components. We soon realized that it was more important that the code have access to the domain and be able to reuse the legacy validation code, so we moved the logic to GS. Seemed simple enough.
It turned out that shifting that one responsibility triggered a lot of framework redesign. Originally, the web component built up the set of changes and passed them to GS on 'save'. It was simple and worked. With the field level code on GS, each 'on blur' event had to trigger a GS call and, more importantly, had to package the full view update state into the call so the domain code would see the current displayed state. Performance is not an issue, since each call takes from 10 to 50 ms, but the code change was more complex than it originally seemed.
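As an illustration only (this is not our framework code), a single field wired this way with Seaside's jQuery binding looks roughly like the following; #gsValidate:value: and the feedback handling are hypothetical names.

renderNameFieldOn: html
	"Sketch only: on blur, send the field value to GS through our interface and
	 keep the validation feedback for the next render. #gsValidate:value: is a
	 hypothetical call; the real one also packages the full pending view state
	 so the domain code sees what is currently displayed."
	html textInput
		value: self nameValue;
		onBlur: (html jQuery ajax
			callback: [:value | self feedback: (self gsValidate: 'name' value: value)]
			value: (html jQuery this value))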
I could find no good way to communicate the status of this 'big bang' change. The problem was complex, then things got delayed due to other work, and we made some critical design changes as we understood the technical issues better. None of that was well communicated. From the outside looking in, the project just stalled. Precisely the kind of optics you don't want, and the kind of problem that agile techniques are supposed to deal with.
I simply do not know how to measure 'thinking time', especially my own. Finding a solution may take me five minutes, or five hours.
It was interesting to feel the pendulum swing from 'creative' to 'construction' as the work progressed; the construction phase is so much easier. Easier to do, easier to measure and easier to manage. Everyone is more at ease when you can show that you're 80% done, vs. just telling them that "you're close".
Digging a ditch is easy, assuming you know how.
Simple things should be simple. Complex things should be possible.
Sunday, 24 February 2013
RESTful GemStone with multiple sessions
The project I'm working on is moving to a full web deployment from a classic VW fat client + GS. Our layers are now Seaside on VW and GemStone. To allow for this migration, we've done interesting work with VW window specs and domain meta data, which I'll comment on in future posts. Here I'll go over how we are interfacing to GemStone from our Seaside server.
Our goal is to have a snappy application, so we view Seaside as a simple presentation layer. It contains no domain classes, only view components with widgets that are coded to know which attribute they display. Getting data from GS is done by packaging the list of displayed attributes, along with the oop of the displayed object, and sending it to GS. GS answers an XML string, which is parsed into 'domain node' objects (generic data holders) and then used by the display.
xmlDomainNodeOop:attributes:collectionOop: #(6944141057 #('comment') 656304641)

<domain>
	<oop>6944141057</oop>
	<objectClassName>BIDcustomer</objectClassName>
	...
	<domain>
		<objectClassName>ByteString</objectClassName>
		<text>comment</text>
		...
		<value>a comment string</value>
	</domain>
</domain>
We develop in a full VW client, so debugging is done by sending the #xmlDomainNodeOop:attributes:collectionOop: method in a workspace. Very handy when trying to recreate a user problem.
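The 'domain node' objects themselves are little more than generic property bags. A minimal sketch is enough to show the idea; the class name, instance variables (oop, className, attributeNodes) and accessors here are made up, and the real class has more behaviour.

DomainNode>>oop
	"The GS oop this node was built from, e.g. 6944141057."
	^oop

DomainNode>>className
	"From <objectClassName>, e.g. 'BIDcustomer'."
	^className

DomainNode>>valueAt: anAttributeName
	"Answer the display string for an attribute such as 'comment', taken from
	 the nested <domain> element's <text>/<value> pair. attributeNodes is an
	 assumed Dictionary of child nodes keyed by their <text>."
	| node |
	node := attributeNodes at: anAttributeName ifAbsent: [^nil].
	^node value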
If you've ever worked with GemStone (and, if you're programming in Smalltalk, my sympathies if you have not), you would be familiar with the GBS interface, the magical code which keeps objects in sync between the client and the server. You can choose to have methods execute where they make most sense (like UI heavy methods on the client and big data footprint methods on the server) and know that your objects are in the correct state in both environments. Very cool.
With our oop + attributes question and XML answer, we don't make use of that GBS feature. In fact, we'd prefer a way to turn it off. Since each request and answer pair is independent, there is no need for session state and we can run each Seaside server with multiple sessions, using round robin dispatching.
Doing that introduces a few technical curiosities (and a huge thanks to Martin McClure for his help in fixing them). First, you can't share a class connector between GS sessions. In the single session model, I had a thin 'client dispatcher' class that forwarded class messages to a server class that interfaced with the domain model and provided tools for building the XML answer. For multiple sessions, I had to link to an instance, so both classes became singletons and each session defined its own connector.
Next, we tripped over the fact that you can't share objects between sessions. That makes sense in a GBS model, since syncing the objects is a session responsibility. But we were not passing domain objects, and each request and answer was done with new Array and String instances. It turns out the problem was a method with an attribute parameter coded as #( ('value') ). Our parameter copy was not deep enough, so each session tried to connect the nested array and we ended up with the error...
"Attempt to associate a Array with more than one GemStone session". Replacing the parameter with Array with: (Array with: 'value') fixed the problem. We've since added our own deepCopy extensions to prevent these types of problems in the future.
The session round robin mechanism is done with our own GS session wrapper. A class collection of session instances is rotated through, with the 'next session' pointer incremented after each use. If a session semaphore is busy, we skip to the next one. If they're all busy, we wait on the one we started with. We've been testing with three sessions, and since most of our GS session access takes less than 50ms, it's very rare that we see a delay due to a busy session. We still have the occasional outlier (like the five seconds in this sample), but most of those are due to GS faulting pages in our test environment. We expect our production server to have everything cached.
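In outline (this is a sketch of the idea, not our wrapper; Sessions, NextIndex, #isBusy and #critical: are illustrative names), the rotation works like...

GsSessionPool class>>withSession: aBlock
	"Sketch: pick the next non-busy session wrapper, advancing the rotation
	 pointer after each look. If every wrapper is busy, wait on the one the
	 rotation started with. Sessions and NextIndex are assumed class variables."
	| startIndex wrapper |
	startIndex := NextIndex.
	[
		wrapper := Sessions at: NextIndex.
		NextIndex := NextIndex \\ Sessions size + 1.
		wrapper isBusy and: [NextIndex ~= startIndex]
	] whileTrue.
	wrapper isBusy ifTrue: [wrapper := Sessions at: startIndex].
	^wrapper critical: aBlock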
It will be interesting to see how well we scale. Our system uses a dispatcher image to direct Seaside sessions to GS images, so we can adjust to load by increasing the number of Seaside images, and the number of GS sessions each image can support.
So far this technology mix is working well for us, and we're getting very positive feedback from our users.
Simple things should be simple. Complex things should be possible.