In most cases my SUnit tests check for specific values, your typical...
self assert: anObject value = 'what I expect'...But there are a couple of projects that I've worked on where checking for a state of an entire object was helpful. These are not typical SUnit assertions, since creating the test ahead of time is not practical. Instead, they are consistency tests: making sure that the state of an object has not changed.
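In SUnit terms, a consistency test is still just an equality assertion; the difference is that the expected value is captured from a manually verified run rather than written by hand. A hypothetical sketch, where #buildComplexWidget and #capturedWidgetPrintString are made-up selectors...

testWidgetConsistency
	"Sketch only: capturedWidgetPrintString answers the printString
	recorded after the object was checked by hand"
	| widget |
	widget := self buildComplexWidget.
	self assert: widget printString = self capturedWidgetPrintString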
The first project was Report4PDF, a simple VW reporting framework that uses PDF4Smalltalk (mailing list).
Adding tests for simple actions didn't add much value, since the challenge of a report tool is getting all the layout definitions to work well together; it's the net result that mattered, not the individual outputs. Those were well tested in the PDF4Smalltalk SUnit tests.
For Report4PDF, after I manually checked a report I wanted to make sure that the output did not change. As more complex reports were added, the simple reports acted as regression tests. Edge cases were the most interesting, and most of those were found in real-world use; I simply did not have the imagination to create the strange scenarios found in the wild. So when an anomaly surfaced in production, I'd build a test report that reproduced the problem, fix it, and add the corrected report check to the test suite.
Stored data consists of both a diagnostic display string and a byte array of the rendered PDF document. The diagnostic string represents the low level data sent to PDF4Smalltalk and rarely needs to be updated. The PDF byte array needs to be rebuilt each time a material change is made to PDF4Smalltalk.
Report4PDF tests are in the Report4PDF-test package and coded in R4PReportTest. Report methods are prefixed with 'example', like...
exampleAlignCenter
" self new exampleAlignCenter saveAndShowAs: 'exampleAlignCenter.pdf' "
| report |
report := R4PReport new.
report businessCard.
report traceToTranscript.
report page grid section
	origin: 10 @ 10;
	width: 100;
	height: 100;
	border: 1;
	align: #center;
	string: 'center align'.
^report
...which produces the output...
...#createTestContentsPrintOutput: is used to create an output content method...
outputAlignCenter
"Generated on February 26, 2012 4:22:43 PM"
^'Report
page width: 252
page height: 144
margin: #(0 0 0 0)
layout: 0 252 144 0
font: #Helvetica font size: 10
page number pattern: ''<page>''
page total pattern: ''<total>''
layout pages: 1
---
page width: 252
page height: 144
maximum Y: 144 (page height - footer)
output parts: 85
0 @ 0 line: 252 @ 00.5
0 @ 10 line: 252 @ 100.5
...
9.5 @ 110 line: 110.5 @ 1101
10 @ 110 line: 10 @ 101
#(10 0 0 -10 34.155 18.215) center align'
...and #createTestMethodHexString: is used to create the byte array of the PDF document...
pdfAlignCenter
"Generated on April 20, 2012 7:35:33 AM"
^'255044462D312E330A25E2E3CFD30A312030206F626A0A3C3C092F50726F6475636572202850444634536D616C6C74616C6B20312E322E3529093E3E0A656E646F626A0A322030206F626A0A3C3C092F54797065202F436174616C6F670A092F...660A3937370A2525454F46'
...finally, #createTestMethodPrintOutput: is used to create the SUnit test method which builds the example output and checks the result. First, the output string...
testOutputAlignCenter
"Generated on February 26, 2012 4:22:53 PM
( self new createTestContentsPrintOutput: #exampleAlignCenter )
( self new exampleAlignCenter saveAndShowAs: 'exampleAlignCenter.pdf' ) "
| report |
report := self exampleAlignCenter.
report buildPDF.
self assert: report printOutput = self outputAlignCenter.
...and then the PDF array...
testPDFAlignCenter
"Generated on February 26, 2012 4:22:49 PM
( self new createTestContentsHexString: #exampleAlignCenter )
( self new exampleAlignCenter saveAndShowAs: 'exampleAlignCenter.pdf' ) "
| report |
report := self exampleAlignCenter.
self assert: (report byteArraySUnitAs: 'testAlignCenter.pdf') asHexString = self pdfAlignCenter
Class side convenience methods are available to rebuild all the output and test methods. Handy when PDF4Smalltalk changes.
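A class-side rebuild method could be sketched along these lines (hypothetical code, not the actual Report4PDF selectors for enumerating examples)...

rebuildAllTestData
	"Sketch only: regenerate the stored output and PDF methods for
	every example report, after a material PDF4Smalltalk change"
	(self selectors select: [:each | 'example*' match: each])
		do: [:each |
			self new createTestContentsPrintOutput: each.
			self new createTestMethodHexString: each]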
----
The other project is the domain model of the application we're building at work. It's the same idea: while developing we write SUnit tests that check for specific values. Typically this requires us to build complex domain resources. Once these are built, and we've checked the model manually, we add a 'capture' (the word 'snapshot' was already used in domain code) of the domain object's state that records all the domain attributes in an array, and stores the array in a data method.
How the data is stored is not that important. Most large Smalltalk applications I've worked on had some kind of meta data for domain objects which can be used to generate a data string.
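With that kind of metadata, building a capture can be as simple as mapping attribute descriptions to values; a hypothetical sketch, where #attributeDescriptions and #valueFor: are stand-ins for whatever the domain layer provides...

capture
	"Sketch only: answers an array of the current value of every
	described domain attribute, in a stable order"
	^self class attributeDescriptions collect: [:each | each valueFor: self]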
What was interesting was how we used the capture data compared with the regular SUnit tests. Normally, we want the tests to stop when an assert fails, but for captures we wanted the test to continue and have it generate a 'capture report' of which values were different. That's because a simple change, like adding a new domain attribute, would cause almost every capture for that domain class to fail.
After some trial and error, we have this workflow...
- if the capture only contains new or deleted attributes, rebuild the capture array, since none of the old data changed
- if any capture data changed, generate a capture report (stored as a 'report' prefixed method) and continue
- if, however, a capture report already exists, cause an assert to fail
- if a capture report is generated, open a browser on the method
- if any capture reports are created, a final #assertNotCaptureReports will cause a failed assert
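The comparison behind that workflow collects all the differences before reporting, rather than failing on the first mismatch; roughly (hypothetical selectors throughout)...

checkCapture: aDomainObject against: storedValues
	"Sketch only: assumes the attribute count has not changed;
	answers the differing old -> new value pairs"
	| differences |
	differences := OrderedCollection new.
	aDomainObject capture with: storedValues do: [:new :old |
		new = old ifFalse: [differences add: old -> new]].
	differences notEmpty ifTrue: [self generateCaptureReport: differences].
	^differences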
Having the capture test stop if an existing capture report is found allows us to selectively diagnose data issues. We've also added a button to the SUnitTool toolbar which rebuilds all the data captures. Handy when attributes are changed, which is almost daily.
On a side note: I've been at HTS now for four months, spending long hours learning and updating a 15-year-old framework that was written with somewhat esoteric design patterns. I see now how lucky I've been over most of my Smalltalk career, mostly working on code that I either created myself, or developed with a team that shared common development ideals. James Robertson has a good podcast on the topic of Common Pitfalls. I think I have examples of everything he and Dave Buck talked about, plus some great ways not to interface with GemStone (and I now loathe lazy initialization, especially when deeply nested and combined with silent exception handling).
Simple things should be simple. Complex things should be possible.