Analyzing code using the Benchmark Workshop
The Benchmark Workshop (also known as the Stats tool) unifies the activities of benching, profiling, and tracing Smalltalk code. In the past, Smalltalk was difficult to benchmark and optimize because execution time could vary widely between runs. It was often hard to tell which code changes led to improvements because small improvements could not be measured accurately. Relying on unstable results often led to erroneous conclusions and wasted effort.
The Benchmark Workshop solves these problems by providing an environment for measuring, optimizing, and tracking the effects of code changes over time.
First, you write a bench method that measures the critical portions of the code. Next, you run the code to establish a stable baseline time. The same code can be profiled (a technique that samples the execution stack) or fully traced. By examining the results, you can determine usage patterns and make statements such as "17% of the time was spent in CLDT." Hot spots and problem areas are highlighted at the application, class, and method level.
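For example, a bench method might look like the following sketch. The method name, class, and workload here are illustrative assumptions rather than conventions required by the workshop; the point is simply to isolate the critical code in a single method that can be run repeatedly:

   benchStringStreaming
       "Illustrative bench method: exercise only the code under measurement.
        Here the workload is building a large string with a WriteStream."
       | stream |
       stream := WriteStream on: String new.
       1 to: 10000 do: [:i | stream nextPutAll: i printString].
       ^stream contents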
Thus, to optimize your code using the Benchmark Workshop, you follow this iterative process:
• Write the bench method
• Build the benchmark
• Stabilize the benchmark (a timing sketch follows this list)
• View the results
• Improve the code
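As a rough cross-check on the stabilize step, you can also time a bench method by hand in a workspace. The sketch below assumes the hypothetical benchStringStreaming method above is installed on a class named BenchSuite; Time class>>millisecondsToRun: is standard Smalltalk, but averaging a few runs this way only approximates what the workshop measures:

   "Evaluate in a workspace: average several runs to reduce run-to-run jitter.
    BenchSuite and benchStringStreaming are the illustrative names used above."
   | times average |
   times := (1 to: 5) collect: [:i |
       Time millisecondsToRun: [BenchSuite new benchStringStreaming]].
   average := (times inject: 0 into: [:sum :each | sum + each]) // times size.
   Transcript show: 'Average: ', average printString, ' ms'; cr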