Planet Squeak

blogs about Squeak, Pharo, Croquet and family

July 24, 2017


Pharo 6.1 (summer) released!

We are releasing Pharo 6.1.
Usually, between major versions we just apply bugfixes, changing the build number and not announcing new versions, but this time is different: the fixes applied required a new VM.
The principal reason for the new version is updated Iceberg support, bringing it to the macOS 64-bit version.
So, now Pharo 6.1 comes with Iceberg 0.5.5, which includes:
– runs on macOS 64 bits
– adds cherry-pick
– adds major performance improvements for big repositories
– adds a pull request review plugin
– repositories browser: groups branches by remote
– adds Bitbucket and GitLab to the providers recognised by the Metacello integration
– uses libgit v0.25.1 as backend
– several bugfixes
Other important change:
– the Linux VM now uses the threaded heartbeat by default.
We are still missing a 64-bit Windows version (sorry for that), but we are getting there. I hope to have it running right after ESUG.
To download the 6.1 version, you can go to the download page, or use zeroconf:
wget -O- | bash

by Stéphane Ducasse at July 24, 2017 08:01 PM

Craig Latta

Livecoding other tabs with the Chrome Remote Debugging Protocol

Chrome Debugging Protocol

We’ve seen how to use Caffeine to livecode the webpage in which we’re running. With its support for the Chrome Remote Debugging Protocol (CRDP), we can also use it to livecode every other page loaded in the web browser.

Some Help From the Inside

To make this work, we need to coordinate with the Chrome runtime engine. For CRDP, there are two ways of doing this. One is to communicate using a WebSocket connection; I wrote about this last year. This is useful when the CRDP client and target pages are running in two different web browsers (possibly on two different machines), but with the downside of starting the target web browser in a special way (so that it starts a conventional webserver).

The other way, possible when both the CRDP client and target pages are in the same web browser, is to use a Chrome extension. The extension can communicate with the client page over an internal port object, created by the chrome.runtime API, and expose the CRDP APIs. The web browser need not be started in a special way; it just needs to have the extension installed. I’ve published a Caffeine Helper extension, available on the Chrome Web Store. Once installed, the extension coordinates communication between Caffeine and the CRDP.

Attaching to a Tab

In Caffeine, we create a connection to the extension by creating an instance of CaffeineExtension:

CaffeineExtension new inspect

As far as Chrome is concerned, Caffeine is now a debugger, just like the built-in DevTools. (In fact, the DevTools do what they do by using the very same CRDP APIs; they’re just another JavaScript application, like Caffeine is.) Let’s open a webpage in another tab, for us to manipulate. The Google homepage makes for a familiar example. We can attach to it, from the inspector we just opened, by evaluating:

self attachToTabWithTitle: 'Google'

Changing Feelings

Now let’s change something on the page. We’ll change the text of the “I’m Feeling Lucky” button. We can get a reference to it with:

tabs onlyOne find: 'Feeling'

When we attached to the tab, the tabs instance variable of our CaffeineExtension object got an instance of ChromeTab added to it. ChromeTabs provide a unified message interface to all the CRDP APIs, also known as domains. The DOM domain has a search function, which we can use to find the “I’m Feeling Lucky” button. The CaffeineExtension>>find: method, which uses that function, answers a collection of search result objects. Each search result object is a proxy for a JavaScript DOM object in the Google page, an instance of the ChromeRemoteObject class.

In the picture above, you can see an inspector on a ChromeRemoteObject corresponding to the “I’m Feeling Lucky” button, an HTMLInputElement DOM object. Like the JSObjectProxies we use to communicate with JavaScript objects in our own page, ChromeRemoteObjects support normal Smalltalk messaging, making the JavaScript DOM objects in our attached page seem like local Smalltalk objects. We only need to know which messages to send. In this case, we send the messages of HTMLInputElement.

As with the JavaScript objects of our own page, instead of having to look up external documentation for messages, we can use subclasses of JSObject to document them. In this case, we can use an instance of the JSObject subclass HTMLInputElement. Its proxy instance variable will be our ChromeRemoteObject instead of a JSObjectProxy.
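A minimal sketch of that arrangement (the #proxy: setter and the exact wiring are assumptions on my part, not confirmed Caffeine API):

```smalltalk
"Sketch, assuming a #proxy: setter: wrap the remote search result
in an HTMLInputElement so its documented messages apply to it."
| result button |
result := (tabs onlyOne find: 'Feeling') first.
button := HTMLInputElement new.
button proxy: result. "a ChromeRemoteObject, not a JSObjectProxy"
```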

For the first message to our remote HTMLInputElement, we’ll change the button label text, by changing the element’s value property:

self at: #value put: 'I''m Feeling Happy'

The Potential for Dynamic Web Development

The change we made happens immediately, just as if we had done it from the Chrome DevTools console. We’re taking advantage of JavaScript’s inherent livecoding nature, from an environment which can be much more comfortable and powerful than DevTools. The form of web applications need not be static files, although that’s a convenient intermediate form for webservers to deliver. With generalized messaging connectivity to the DOM of every page in a web browser, and with other web browsers, we have a far more powerful editing medium. Web applications are dynamic media when people are using them, and they can be that way when we develop them, too.

What shall we do next?


by Craig Latta at July 24, 2017 07:40 PM

Torsten Bergmann

Pharo 6.1 released

New image and new VM for better Git support. Read more.

by Torsten at July 24, 2017 07:34 PM

July 23, 2017


System Monitoring Images & Nagios

I just made it Nagios-compatible. I developed it because I’m using munin [1]. You can look at this blog post [2] to see how to do it. If you have questions, just ask.

by Stéphane Ducasse at July 23, 2017 11:44 AM

July 21, 2017

PharoWeekly API

I made an API tool here:!/~pdebruic/SegmentIO

and it takes the logging events and can send them to any of these 200+ tools:

Last time I checked it was working, but it’s been a while. Unless they’ve changed things dramatically it should work.


by Stéphane Ducasse at July 21, 2017 04:15 PM

Benoit St-Jean

Song of the day (1318)

Always a bit freaky to hear the lyrics of their songs again, knowing that Chester killed himself. After the fact, it makes your blood run cold. In retrospect, it was written in the heavens, sprinkled throughout the lyrics. Freaky!

Breaking The Habit by Linkin Park.

I don’t want to be the one
The battles always choose
‘Cause inside I realize
That I’m the one confused

Filed under: music, musique Tagged: Breaking The Habit, Linkin Park

by endormitoire at July 21, 2017 10:34 AM

Song of the day (1317)

These Dreams by Heart.

These dreams go on when I close my eyes
Every second of the night I live another life
These dreams that sleep when it’s cold outside
Every moment I’m awake the further I’m away

Filed under: music, musique Tagged: Heart, These Dreams

by endormitoire at July 21, 2017 09:57 AM

Song of the day (1316)

Sweet Home Alabama by Lynyrd Skynyrd.

Sweet home Alabama
Where the skies are so blue
Sweet home Alabama
Lord, I’m coming home to you

Filed under: music, musique Tagged: Lynyrd Skynyrd, Sweet Home Alabama

by endormitoire at July 21, 2017 09:46 AM

Song of the day (1315)

Crazy Train by Ozzy Osbourne.

Mental wounds not healing
Who and what’s to blame
I’m goin’ off the rails on a crazy train
I’m goin’ off the rails on a crazy train

Filed under: music, musique Tagged: Crazy Train, Ozzy Osbourne

by endormitoire at July 21, 2017 09:39 AM

Song of the day (1314)

No One Like You by The Scorpions.

There’s no one like you
I can’t wait for the nights with you
I imagine the things we’ll do
I just want to be loved by you

Filed under: music, musique Tagged: No One Like You, The Scorpions

by endormitoire at July 21, 2017 09:29 AM

Song of the day (1313)

School by Supertramp.

After school is over you’re playing in the park
Don’t be out too late, don’t let it get too dark

Filed under: music, musique Tagged: School, Supertramp

by endormitoire at July 21, 2017 09:24 AM

Song of the day (1312)

Sleepy Maggie by Ashley MacIsaac.

Filed under: music, musique Tagged: Ashley MacIsaac, Sleepy Maggie

by endormitoire at July 21, 2017 09:15 AM

July 20, 2017

Torsten Bergmann

Sista Open Alpha

The Cog VM already made a huge difference in performance for the OpenSmalltalk VM shared by Squeak, Pharo, Cuis and Newspeak. Now Sista, the optimizing JIT, is entering open alpha, and it looks set to increase performance even more. Read here.

by Torsten at July 20, 2017 06:26 PM

July 19, 2017


Free Ephemeric Cloud for Members

Pharo cloud… is now available for free for Pharo association members.

by Stéphane Ducasse at July 19, 2017 03:12 PM

Sista: the Optimizing JIT for Pharo getting open-alpha

Another great blog post from Clement Bera, one of the main architects of the forthcoming optimising JIT for Pharo.


by Stéphane Ducasse at July 19, 2017 02:09 PM

Clement Bera

Sista: open alpha release

Hi everyone,

It is now time to make an open alpha release of the Sista VM. As with all alpha releases, it is reserved for VM developers (the release is not relevant for non-VM developers, and clearly no one should deploy any application on it yet). Last year we had a closed alpha release with a couple of people involved, such as Tim Felgentreff, who added support for Sista VM builds in the Squeak speed center after tuning the optimisation settings.

The main goal of the Sista VM is to add adaptive optimisations, such as speculative inlining, to Cog’s JIT compiler, using the type information present in the inline caches. Such optimisations both improve Cog’s performance and allow developers to write easy-to-read code rather than fast-to-execute code, without performance overhead (typically, #do: performs the same as #to:do:).
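To illustrate the #do: versus #to:do: claim, both loops below compute the same sum; on the Sista VM the readable #do: form should no longer pay a penalty relative to the manually indexed one, because the iteration and its block get inlined:

```smalltalk
"Readable form: generic iteration with #do:"
| numbers sum1 sum2 |
numbers := #(1 2 3 4 5).
sum1 := 0.
numbers do: [ :each | sum1 := sum1 + each ].

"Hand-optimised form: an explicit #to:do: loop over indices,
which the bytecode compiler inlines directly."
sum2 := 0.
1 to: numbers size do: [ :i | sum2 := sum2 + (numbers at: i) ].
sum1 = sum2 "both answer 15"
```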


As shown in the following figure, generated from the Squeak speed center data, benchmarks are typically between 1.5x and 5x faster on the Sista VM than on the current production VM. The figure shows the time to run each bench (hence, smaller columns imply less time spent in the bench and a faster VM). Four columns are shown for each benchmark:


The image is extracted from my Ph.D. thesis, where one can find all the relevant data needed to reproduce the benchmarks.

In practice, on real-application benchmarks (such as the TCAP benchmark, not shown in the figure), the Sista runtime is around 1.5x faster. Specific smaller benchmarks sometimes show more significant speed-ups (JSON parsing, bench (c) in the figure, showing 5x), or no speed-up at all (Mandelbrot, bench (i) in the figure, where the time is spent in double floating-point arithmetic and I did not implement double optimisations in Sista).


For this first release, the main focus has been on closure inlining and on getting decent benchmark results, to interest people looking for an efficient Smalltalk.

Several optimisations (String comparisons, inlined object allocations, unchecked array accesses) show a 1.5x speed-up on benchmarks where those operations are intensive, but based on profiling of larger applications (for example the Pharo IDE), the speed-up comes mainly from closure inlining.

Some benchmarks in the benchmark suite focus on other things, such as 32-bit large integers or double floating-point arithmetic. These benchmarks typically use inlined loops (#to:do:, etc.) and hence don’t really benefit much from the runtime compiler.

Naming convention

Just a little discussion on the names not to confuse everyone…

Sista is the name of the overall infrastructure/runtime.

Scorch is the bytecode-to-bytecode optimising JIT, written in Smalltalk. It relies on Cogit as a back-end to generate machine code. It can ask Cogit for specific things, such as the inline cache data of a specific method.

Cogit is the bytecode-to-machine-code JIT compiler. It is used alone as a baseline JIT and can be combined with Scorch to act as an optimising JIT.

The following figure summarises the Sista architecture and the interactions between the frameworks:
Screen Shot 2017-07-17 at 11.05.38.png

Overview of the runtime compiler Scorch

Scorch is called by Cogit on a context with a tripping counter (i.e., a portion of code executed many times). The optimiser then:

  1. Selects a context to optimise (always a method context)
  2. Decompiles the context’s method to an SSA IR
  3. Performs a set of optimisations
  4. Generates back an optimised compiled method
  5. Installs the optimised method and registers its dependencies

In step 1, Scorch looks for the contexts defining closures on the stack. The typical case is that the Array>>#do: method has a tripping counter. Optimising Array>>#do: wouldn’t make sense if the optimiser could not also optimise the closure evaluated at each iteration of the loop. The optimiser typically selects the sender context of Array>>#do: for optimisation, so that later in the optimisation process the closure creation and evaluation can be removed.
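For example, given a hypothetical sender like the method below, Scorch would select this sender for optimisation so that, once Array>>#do: is inlined into it, the creation and evaluation of the block can be eliminated as well:

```smalltalk
"Hypothetical example method: the counter trips inside
Array>>#do:, but Scorch optimises this *sender*, so the
block below can be inlined into the loop body and its
allocation removed."
sumOf: anArray
	| sum |
	sum := 0.
	anArray do: [ :each | sum := sum + each ].
	^ sum
```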

In step 2, Scorch generates a control flow graph of basic blocks, each basic block having a linear sequence of instructions. This step includes heavy canonicalisation and annotation of the representation (basicBlocks are sorted in reverse postOrder, annotated with the dominator tree, loops are canonicalised, sends are annotated with runtime information from Cogit, the minimum set of phis is computed, etc.).

In step 3, Scorch performs a set of optimisations on the control flow graph. Multiple inlining phases happen, where the goal is to inline code in nested loops, to inline short methods and to inline code that would lead to constant folding or closure inlining. Part of the inlining phase consists of removing temp vectors (once the closures are inlined). Aside from inlining, one optimisation phase focuses on loops, hoisting code out of them and in rare cases unrolling them. The other phases consist of dead branch removal, a better SmallInteger comparison/branch pipeline, redundant type-check removal, common subexpression elimination, elimination of unused side-effect-free instructions, heap read/write redundancy elimination and other minor things like that.

In step 4, Scorch makes small changes to get the representation into a proper state for code generation (some instructions are expanded, the single return point is split into multiple ones, etc.). It then analyses the representation to figure out which values will become temporary variables and which will be spilled on the stack. Future temporaries are then assigned a temp index. Temp indices are assigned first by coalescing phis (to decrease temp writes) and second through graph coloring (to use the least number of temps). Once done, the representation is traversed, generating bytecodes for each basic block. The size of each jump is then computed, and the final optimised method is generated.

In step 5, Scorch installs the optimised method, potentially in a subclass of the original method’s class (customisation). The optimised method has a special literal which includes all the deoptimisation metadata needed to reconstruct the runtime stack with non-optimised code at each interrupt point. In addition, Scorch adds to the dependency manager a list of selectors which require the optimised method to be discarded if a new method with one of these selectors is installed (look-up results could change, confusing speculative inlining, etc.).
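As a purely hypothetical illustration of that dependency mechanism (the class Foo and its method are invented for the example, not part of Sista):

```smalltalk
"Sketch: suppose an optimised method speculatively inlined
Foo>>#bar. Scorch registered #bar as one of its dependent
selectors, so installing any new #bar discards it."
Foo compile: 'bar ^ 42' classified: #example.
"Any optimised method that inlined the previous Foo>>#bar is
now invalidated; its callers deoptimise at the next interrupt
point and fall back to unoptimised code."
```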

Next optimisations to implement

Apart from rethinking the optimisation planning and improving all the existing optimisations, new optimisations may be added. Off the top of my head, the major next things I can think of are probably:

There are also multiple minor things to do here and there. Improving loop optimisations would likely yield significant speed-up too.

How to get/build a Sista image and VM

1) Get the Pharo 6 release image and VM, for example by doing in the command line:
wget --quiet -O - | bash

2) Execute the following code (DoIt) to prepare the image:

"Add special selector for trap instruction"
Smalltalk specialObjectsArray at: 60 put: #trapTripped.
"Disable hot spot detection (to load the Scorch code)"
Smalltalk specialObjectsArray at: 59 put: nil.
"Recompile the fetch mourner primitive which has strange side-effect with alternate bytecode set and closures"
WeakArray class compile: 'primitiveFetchMourner ^ nil' classified: #patch.
"Enable FullBlockClosure and alternate bytecode set"
CompilationContext bytecodeBackend: OpalEncoderForSistaV1.
CompilationContext usesFullBlockClosure: true.
OpalCompiler recompileAll.

3) Load Scorch by using this DoIt:
Metacello new
   repository: '';
   configuration: 'Scorch';
   version: #stable;
   load.

4) Go to
and compile a squeak.sista.spur VM.

Alternatively, pre-compiled VMs are available on Cog’s Bintray (some OS/processor combinations may not be available, though).

5) Restart your image with the Sista VM. You can now execute:

"Opening Transcript"
Transcript open.
"Reference value"
25 tinyBenchmarks logCr.
"Enable Scorch optimizations"
Smalltalk specialObjectsArray at: 59 put: #conditionalBranchCounterTrippedOn:.
"Optimised value"
25 tinyBenchmarks logCr.
"Disable Scorch optimizations"
Smalltalk specialObjectsArray at: 59 put: nil.

It should show something like this on the Transcript (copied from my machine):

'2486945962 bytecodes/sec; 150270417 sends/sec'
Counter tripped in Integer>>#benchmark
Installed SmallInteger>>#tinyBenchmarks in SmallInteger
Counter tripped in Integer>>#benchmark
Installed SmallInteger>>#benchmark in SmallInteger
Counter tripped in Integer>>#benchFib
Installed SmallInteger>>#benchFib in SmallInteger
'3849624060 bytecodes/sec; 271220541 sends/sec'

That code was run on the Sista runtime.

6) Optionally, add in Monticello the repo and load the 2 packages to have a set of benchmarks to toy with.

Note when toying around

If you want to experiment with the Sista runtime, you need to note:

Another interesting thing is to do:
optimisedMethod metadata printDebugInfo
which shows most of the inlined code in the given optimised method and allows one to understand the optimiser’s inlining decisions. In the case of tinyBenchmarks, the method benchmark would show something like this (based on my machine):

   52) atAllPut: Inlined (SequenceableCollection>>#atAllPut:) [0]
     41) from:to:put: Inlined (SequenceableCollection>>#from:to:put:) [1]
       56) min: Inlined (Magnitude>>#min:) [2]

The number at the beginning (52) is the bytecode offset of the inlined send, followed by the selector of the send (atAllPut:), followed by the method inlined (SequenceableCollection>>#atAllPut:). In some cases there may be several methods inlined. The last number ([0]) is the order in which the methods were inlined.

The indentation means the inlining depth (#from:to:put: is inlined in #atAllPut: itself inlined in #benchmark for example).

In the case of benchmark, other methods were inlined, but they were proven to be non-failing primitives, so they are not shown here.

In the case of non-local return inlining, more complex logic is involved and the debug info may be incomplete.

System integration: Some TODOs

Many things are only partially done in the IDE. Customised methods are currently shown in the class browser. It is possible at each interrupt point to deoptimise an optimised context into multiple deoptimised contexts, but the debugger code needs to be updated to do so. Hooks for method installation need to be added to correctly ask Scorch to discard optimised methods that depend on the installed selector.

Another thing is the thisContext keyword, which now sometimes shows an optimised context. Again, at an interrupt point it is possible to request deoptimisation, but no IDE tool is doing so right now.

Lastly, the deoptimiser is written in Pharo, but it is meant to be completely independent from the rest of the code and needs love. Some parts still have dependencies, leading to crashes.

I hope you enjoyed the post. Please report on the vm-dev mailing list any experiment with the Sista VM.





by Clement Bera at July 19, 2017 01:06 PM

Hernan Morales

Iliad version 0.9.6 released

Lately I have been playing with the Iliad Web Framework, and decided to publish some updates which I want to share with you:
– A new web site based on GitHub Pages, with install instructions, screenshots and links to pre-loaded images and documentation.
– Updated Iliad to load in Pharo 6.0.
– Added an Iliad Control Panel, based on the Seaside one, which allows one to create/inspect/remove web server adapters.

by Hernán at July 19, 2017 05:36 AM

July 18, 2017


Keccak-256 hashing algorithm

Hi there!

I am just releasing the first version of the Keccak-256 hashing algorithm.
You can find it at:
This version is based on a JavaScript implementation:
This implementation supports ByteArrays and ASCII and UTF-8 strings as messages.
Soon I will be adding support for the rest of the Keccak family of hashing functions; since the implementation is quite configurable, it just needs some constructors with specific configurations, plus tests for these other use cases.
Here is a one-liner for building an image with version v0.1:
 wget -O- | bash
Hope you find it useful 🙂

by Stéphane Ducasse at July 18, 2017 09:12 PM

Torsten Bergmann


Pierce extended Sven's excellent " in 10 elegant classes" with even more. Read more.

by Torsten at July 18, 2017 09:52 AM

July 16, 2017

Pierce Ng


I have started a booklet on Pharo, hopefully the first of, um, more than one. It is entitled RedditSt20, on my fork and extension of Sven Van Caekenberghe's excellent " in 10 elegant classes", to cover the following in another 10 or so classes:

The book is hosted on Github. Source code is on Smalltalkhub.

The book is being written using Pillar, of course. Note that the Pharo 5 version of Pillar that I downloaded from InriaCI doesn't work - the supporting makefiles aren't able to obtain the output of "./pillar introspect <something>". Use the Pharo 6 version.

by Pierce Ng at July 16, 2017 02:33 AM

July 15, 2017

Torsten Bergmann


PharoLambda is a simple example and GitLab build script for deploying a minimal Pharo Smalltalk image/vm to AWS Lambda.

by Torsten at July 15, 2017 08:49 PM

July 14, 2017

Torsten Bergmann

Teapot: Web Programming Made Easy

Nice article on how to write a web application with Pharo’s Teapot framework.

by Torsten at July 14, 2017 08:11 PM

Iceberg 0.5

A new release of Iceberg for Pharo is available to work with Git.

by Torsten at July 14, 2017 08:04 PM


An add-in for Pharo’s Quality Assistant. Read more

by Torsten at July 14, 2017 08:03 PM

Visualization of Regular Lattice with Pharo & Roassal

by Torsten at July 14, 2017 08:00 PM


Debris Publishing has a new version of Quuve, an investment management platform written in Pharo and Seaside. It is another success story and another example of "things people built with Smalltalk". They use my Twitter Bootstrap for Seaside project, which reminds me that I want to update that project if my spare time permits. A full video demo is here.

by Torsten at July 14, 2017 12:32 PM

Seaside 3.2.4 released

Seaside 3.2.4 was released today.

by Torsten at July 14, 2017 12:20 PM

Benoit St-Jean

Song of the day (1311)

Moonglow by Benny Goodman.

Filed under: music, musique Tagged: Benny Goodman, Moonglow

by endormitoire at July 14, 2017 11:54 AM

Song of the day (1310)

Sweetest Taboo by Sade.

There’s a quiet storm
And it never felt like this before
There’s a quiet storm
That is you

Filed under: music, musique Tagged: Sade, Sweetest Taboo

by endormitoire at July 14, 2017 11:48 AM

Song of the day (1309)

Toxicity by System of a Down.

Now, what do you own the world?
How do you own disorder, disorder

Filed under: music, musique Tagged: System of a Down, Toxicity

by endormitoire at July 14, 2017 11:40 AM