Planet Squeak

blogs about Squeak, Pharo, Croquet and family

July 14, 2021

Craig Latta

realtime vocal harmonization with Caffeine

I’ve written a Caffeine class which, in real time, takes detected pitches from a melody and chords, and sends re-voiced versions of the chords to a harmonizer, which renders them using shifted copies of the melody. It’s an example of an aggregate audio plugin, which builds a new feature from other plugins running in Ableton Live.

re-creating a classic

Way, way back in 1991, before the Auto-Tune algorithm was popularized in 1998, a Canadian company called IVL Technologies developed a hardware harmonizer, the Vocalist VHM5. It generated five-part vocal harmonies live, from sung melodies and chords played via MIDI. It had a simple but effective model of vocal formants, which enabled it to shift the pitch of a sung note to natural-sounding new pitches, including correcting the pitch of the sung note. It also had very fast pitch detection.

My favorite feature, though, was how it combined those features when voicing chords. In what was called “vocoder mode”, it would adjust the pitches of incoming MIDI chords to be as close as possible to the current pitch of a sung melody (a closed voicing). If the melody moved more than half an octave away from a chord voice, the rendered chord voice would adjust by some number of octaves up or down, so as to stay within half an octave of the melody. With kinetic melodies and dense chords, this becomes a simple but compelling voice-leading technique. It’s even more compelling when the voices are spatialized in a stereo or 3D audio field, with reverb, reflections, and other post-processing.
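The re-voicing rule above is easy to sketch in code. Here it is in JavaScript (not the VHM5’s firmware or Caffeine’s Smalltalk); pitches are MIDI note numbers, and the function name is mine:

```javascript
// "Vocoder mode" closed voicing: shift each chord note by whole octaves
// until it lies within half an octave (six semitones) of the melody pitch.
function closedVoicing(melodyPitch, chordPitches) {
  return chordPitches.map(pitch => {
    let voiced = pitch;
    while (voiced - melodyPitch > 6) voiced -= 12;  // too far above: drop an octave
    while (melodyPitch - voiced > 6) voiced += 12;  // too far below: raise an octave
    return voiced;
  });
}
```

For example, with the melody on middle C (60), a chord voice at 48 is a full octave below, so it is raised to 60, while voices at 55 and 64 are already within half an octave and stay put.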

It’s also computationally inexpensive. The IVL pitch-detection and shifting algorithms were straightforward for off-the-shelf digital signal processing chips to perform, and the Auto-Tune algorithm is orders of magnitude cheaper. One of the audio plugins I use in the Ableton Live audio environment, Harmony Engine by Antares, implements Auto-Tune’s pitch shifting. Another, MIDI Guitar by Jam Origin, does polyphonic pitch detection. With these plugins, I have all the live MIDI information necessary to implement closed re-voicing, and the pitch shifting for rendering it. I suppose I would call this “automated closed-voice harmonization”.


Caffeine runs in a web browser, which, along with Live, has access to all the MIDI interfaces provided by the host operating system. Using the WebMIDI API, I can receive and schedule MIDI events in Smalltalk, exchanging music information with Live and its plugins. With MIDI as one possible transport layer, I’ve developed a Smalltalk model of music events based upon sequences and simultaneities. One kind of simultaneity is the chord, a collection of notes sounded at the same time. In my implementation, a chord performs its own re-voicing, while also taking care to send a minimum of MIDI messages to Live. For example, only the notes which were adjusted in response to a melodic change are rescheduled. The other notes simply remain on, requiring no sent messages. Caffeine also knows how many pitch-shifted copies of the melody can be created by the pitch-shifting plugin, and culls the least-recently-activated voices from chords, to remain within that number.
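The minimal-message idea can be illustrated with a small diff between voicings: only pitches that changed produce MIDI traffic, while held notes produce none. This is a JavaScript sketch of the concept, not Caffeine’s actual Smalltalk code:

```javascript
// Compare the previous and new voicings; emit note-off only for pitches
// that disappeared and note-on only for pitches that appeared. Pitches
// common to both voicings simply remain on, requiring no messages.
function voicingDiff(previous, next) {
  const off = previous.filter(p => !next.includes(p));
  const on = next.filter(p => !previous.includes(p));
  return { off, on };
}
```

So moving a C major triad’s third from E to F sends exactly two messages: one note-off for 64 and one note-on for 65.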

All told, I now have a perfect re-creation of the original Vocalist closed-voicing sound, enhanced by all the audio post-processing that Live can do.

the setup

a GK-3 hex pickup through a breakout box

Back in the day, I played chords to the VHM5 from an exotic MIDI electric guitar controller, the Zeta Mirror 6. This guitar has a hex (six-channel) pickup, and can send a separate data stream for each string. While I still have that guitar, I also have a Roland GK-3 hex pickup, which is still in production and can be moved between guitars without modifying them. Another thing I like about hex pickups is having access to the original analog signal for each string. These days I run the GK-3 through a SynQuaNon breakout module, which makes the signals available at modular levels. The main benefit of this is that I can connect the analog signals directly to my audio interface, without software drivers that may become unsupported. I have a USB GK-3 interface, but the manufacturer never updated the original 32-bit driver for it.

Contemporary computers can do polyphonic pitch detection on any audio stream, without the use of special controller hardware. While the resulting MIDI stream uses only a single channel, with no distinction between strings, it’s very convenient. The Jam Origin plugin is my favorite way to produce a polyphonic chord stream from audio.

the ROLI Lightpad

My favorite new controller for generating multi-channel chord streams is the ROLI Lightpad. It’s a MIDI Polyphonic Expression (MPE) device, using an entire 16-channel MIDI port for each instrument, and a separate MIDI channel for each note. This enables very expressive use of MIDI channel messages for representing the way a note changes after it starts. The Lightpad sends messages that track the velocity with which each finger strikes the surface, how it moves in X, Y, and Z while on the surface, and the velocity with which it leaves the surface. The surface is also a display; I use it as a five-by-five grid, which presents musical intervals in a way I find much more accessible than that of a traditional piano keyboard. There are several MPE instruments that use this grid, including the Linnstrument and the GeoShred iPad app. The Lightpad is also very portable, and modular; many of them can be connected together magnetically.

The main advantage of using MPE for vocal harmonization is that various audio-processing state can be associated with each chord voice’s separate channel. For example, the bass voice of a chord progression can have its own spatialization and equalization settings.

My chord signal path starts with an instrument, a hex or normal guitar or Lightpad. Audio and MIDI data go from the instrument, through a host operating system MIDI interface, through Live where I can detect pitches and record, through another MIDI interface to Caffeine in a web browser, then back to Live and the pitch-shifting plugin. My melody signal path starts with a vocal performance using a microphone, through Live and pitch detection, then through pitch shifting as controlled by the chords.

Let’s Play!

Between this vocal harmonization, control of the Ableton Live API, and the Beatshifting protocol, there is great potential for communal livecoded music performance. If you’re a livecoder interested in music, I’d love to hear from you!

by Craig Latta at July 14, 2021 09:20 AM

July 11, 2021


Script to deploy Pharo Reddit

by Stéphane Ducasse at July 11, 2021 02:28 PM

[Ann] Chapter 3 of Pharo 9 by example

Here is the third chapter of Pharo by Example for Pharo 9.0.

by Stéphane Ducasse at July 11, 2021 02:16 PM

July 08, 2021


[Ann] Chapter 2 of Pharo by Example For P9

Here is the second chapter of Pharo by Example for Pharo 9.0.

If you see mistakes please report them at

by Stéphane Ducasse at July 08, 2021 12:36 PM

July 03, 2021


[Ann] Chapter 1 of Pharo by Example for Pharo 9.0


I have restarted work on a new version of Pharo by Example, this time for Pharo 9.0.

Here is the first chapter.

If you see mistakes please report them at


by Stéphane Ducasse at July 03, 2021 04:18 PM

July 02, 2021


Thank you, association member!

Hi Pharo Association Member.

Yes, you! First, we would like to thank you! Your 100 or 40 Euros are important.

And we would like to show you that your membership in the Pharo association is really helping Pharo.

You concretely help Pharo!!! So thanks.

We would like to really thank you for your support, because as you will see, it is important. Your contributions are making an impact. Until now, the association has not made visible how your membership money is spent.

Your association contributions are securing vital Pharo infrastructure. It pays for:

We wanted to do more, but COVID killed our energy. Now, if you have ideas, do not hesitate to let us know. (you can email to

We would also like to remind you of the advantages of being an association member:

Stef, on behalf of the Pharo Association

by Stéphane Ducasse at July 02, 2021 09:43 PM

June 29, 2021

Program in Objects

Brain Cancer?

Last night, I had a premonition in my sleep that I was going to get brain cancer. It literally scared me awake.

And do you know what was the first thought on my mind? Not that I was going to leave my friends and family behind. Not that I wouldn’t survive to the average Canadian life expectancy of 80 years (for men). Not that I would miss the upcoming Star Trek series with Michelle Yeoh (“Section 31”).

No, my first thought was whether I would live long enough to shepherd next summer’s Camp Smalltalk Supreme event to success. If I die without seeing this through, I’ll never be able to live with myself.

So, pray to whatever god you believe in that I don’t get brain cancer, please.


by smalltalkrenaissance at June 29, 2021 04:42 PM

June 25, 2021


[Ann] New consortium member: DGtal Aqua

The Pharo Consortium is very happy to announce that the DGtal Aqua Lab has joined the Consortium as an Academic Member.


– DGtal Aqua Lab:
– Pharo Consortium:

The goal of the Pharo Consortium is to allow companies and institutions to support the ongoing development and future of Pharo.
Individuals can support Pharo via the Pharo Association:

by Stéphane Ducasse at June 25, 2021 06:04 AM

June 18, 2021


Show me your tests…

Show me your tests and I will tell you who you are…

In Pharo the number of tests and their focus are steadily increasing. As of today, 88275 tests are run for each integration.

And we will continue, because tests are our motto.

Did you notice? Projects without tests say nothing on the topic. They promote features, but give no idea of the trust level they deserve.

Pharo consortium

by Stéphane Ducasse at June 18, 2021 06:47 AM

June 17, 2021

Program in Objects

50th Birthday Celebrations of Programming Languages

I did a quickie survey of 50th birthday celebrations for programming languages. I was disappointed to find very few legitimate events.

Now, obviously, only programming languages created before 1972 could have had 50th birthday celebrations: languages like FORTRAN, LISP, COBOL, BASIC, and Pascal, to name a few of the prominent survivors still living today.

For these, I found this event for FORTRAN: Fortran’s Fiftieth Birthday. Not exactly a big deal. No birthday banquet nor free swag that I could determine.

There was this event for LISP, but it was couched in a larger, general event: The Evolution of Lisp. Again, no birthday banquet nor free swag.

For the rest, some people published articles to celebrate the birthdays. Borrring.

Don’t major programming languages deserve real birthday celebrations? Maybe I’m being silly.

Will there be 50th birthday celebrations for C, Prolog, Ada?

Anyway, I invite people to attend the 50th birthday celebration for Smalltalk at Camp Smalltalk Supreme next year. It should be a real blast!

by smalltalkrenaissance at June 17, 2021 07:25 PM

June 14, 2021


[Ann] ODBC framework for Pharo

There is now an ODBC framework for Pharo, available at the pharo-rdbms github site:
This is based on the Dolphin Smalltalk Database Connection ODBC framework. Provided a suitable driver manager is installed, this should work on macOS and Linux in addition to Windows.
Thanks to InfOil for supporting the development, to Torsten for tidying up and hosting the code, and to Andy and Blair (the Dolphin developers) for the original framework.

John Aspinall

by Stéphane Ducasse at June 14, 2021 10:19 AM

June 12, 2021


New company selling PharoJS products

Hi everyone,
I’m glad to announce a new Pharo-based commercial product: PLC3000 (
It’s a SaaS solution for teaching PLC programming for factory automation. The server side is based on Zinc and the client side uses PharoJS.
This wouldn’t have been possible without the great work done by the community in large, and more specifically, the Pharo consortium. 
Thank you all, Noury

by Stéphane Ducasse at June 12, 2021 03:20 PM

June 10, 2021


Progress report 2021/06/09

We are slowly moving on to a “ready to release” status, but there are still some tasks to do, and in fact we have one new short-term task. Still, it may not look like it, but we have improved the stability and the speed of integrations, which means an overall better life and a better position to move on :) Also, I split the short-term goals into tasks that are easier to measure, so I can remove them 😉

Short-term goals:

– Improve quality and quantity of tests in StInspector 

– Improve quality and quantity of tests in StSpotter 

– Improve quality and quantity of tests in StPlayground

– Improve quality of class comments in Spec2 framework.

– Since we are in freeze mode: fixing important bugs on Pharo9 and its components (this issue will stay here until release).

– include M1 in PharoLauncher

– Remove pharo catalog from image

Medium-term goals:

– Removal of GTSpotter

– M1 VM release.

– Release 9.0

Long-term goals: 

– Removal of remaining GTTools

– Removal of Glamour

– Removal of Spec1

Last week:

– ED (the Emergency Debugger) was fixed (the UI was revamped to work on OSWindow and the SDL2 backend, in fact).

– Pass on the Spec and NewTools repositories. Now development branches are called dev-1.0 and the stable branch is “Pharo9.0” (this will better fit the development cycle).

– Some enhancements in the new spotter

– For Spec, enhance tests in trees/lists/dropdowns.

– Stef added some improvements to microdown (Still for P10).

– Pablo made some fixes to the test runner

– … and Marcus was busy fixing bugs and integrating PRs

This week (starting 2021-06-07):

– Some final cleanups (GTSpotter and Catalog… yes, Catalog is out, because for now it is better to have nothing than to have something that misleads people)

– Take care of some crashes on the new M1

– Adapting PharoLauncher to download M1 VMs

– More on the PharoLauncher command line

by Stéphane Ducasse at June 10, 2021 10:09 AM

June 05, 2021

Craig Latta

Ableton Livecoding with Caffeine

Livecoding access can tame the complexity of Ableton Live.

I’ve written a proxy system to communicate with Ableton Live from Caffeine, for interactive music composition and performance. Live includes Max for Live (M4L), an embedded version of the Max media programming system. M4L has, in turn, access both to Node.JS, a server-side JavaScript engine embedded as a separate process, and to an internal JS engine extension of its own object system. Caffeine can connect to Node.JS through a websocket, Node.JS can send messages to Max, Max can call user-written JS functions, and those JS functions can invoke the Live Object Model, an API for manipulating Live. This stack of APIs also supports returning results back over the websocket, and establishing callbacks.

getting connected

Caffeine creates a websocket connection to a server running in M4L’s Node.JS, using the JS WebSocket function provided by the web browser. A Caffeine object can use this connection to send a JSON string describing a Live function it would like to invoke. Node.JS passes the JSON string to Max, through an output of a Max object in a Max program, or patcher:

connecting the Node.JS server with JS Live API function invocation

Max is a visual dataflow system, in which objects’ inputs and outputs are connected, and their functions are run by a real-time scheduler. There are two special objects in the patcher above. The first is node.script, which controls the operation of a Node.JS script. It’s running the Node.JS script “caffeine-server.js”, which creates a websocket server. That script has access to a Max API, which it uses to send data through the output of the node.script object.

The second special object is js, which runs “caffeine-max.js”. That script parses the JSON function invocation request sent by Caffeine, invokes the desired Live API function, and sends the result back to Caffeine through the Node.JS server.
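As a hedged sketch of what a script in the role of “caffeine-max.js” might do (the request and reply field names here are illustrative, not the actual Caffeine protocol):

```javascript
// Parse a JSON invocation request, call the named function on the looked-up
// Live object, and produce a JSON reply to be routed back over the websocket.
// `liveObjects` stands in for whatever maps identifiers to Live API objects.
function handleRequest(jsonString, liveObjects) {
  const { id, receiver, selector, args } = JSON.parse(jsonString);
  const target = liveObjects[receiver];        // find the receiving Live object
  const result = target[selector](...args);    // invoke the requested function
  return JSON.stringify({ id, result });       // reply, tagged with the request id
}
```

Tagging the reply with the request’s id is what lets the browser side match results to the messages that asked for them.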


With this infrastructure in place, we can create a proxy object system in Caffeine. In class Live, we can write a method which invokes Live functions:

invoking a Live function from Caffeine

This method uses a SharedQueue for each remote message sent; the JS bridge callback process delivers results to these queues. This lets us nest remote message sends among multiple processes. The JSON data identifies the function and argument of the invocation, the identifier of the receiving Live object, and the desired Smalltalk class of the result.
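A JavaScript analogue of the per-message result-queue idea, using a map of pending promises keyed by message id (all names here are illustrative, not Caffeine’s):

```javascript
// Each remote send gets its own pending slot, so results arriving in any
// order are delivered to the caller that asked for them. `send` stands in
// for the websocket send function.
function makeBridge(send) {
  const pending = new Map();           // message id -> resolve function
  let nextId = 0;
  return {
    invoke(payload) {                  // send a request, await its result
      const id = nextId++;
      send(JSON.stringify({ id, ...payload }));
      return new Promise(resolve => pending.set(id, resolve));
    },
    onReply(jsonString) {              // callback fired by the websocket
      const { id, result } = JSON.parse(jsonString);
      pending.get(id)(result);
      pending.delete(id);
    }
  };
}
```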

The LiveObject proxy class can use this invoking function from its doesNotUnderstand method:

forwarding a message from a proxy

Now that we have message forwarding, we can represent the entire Live API as browsable Smalltalk classes. I always find this of huge benefit when doing mashups with external code libraries, but especially so with Live. The Live API is massive, and while the documentation is complete, it’s not very readable. It’s much more pleasant to learn about the API with the Smalltalk browsing tools. As usual, we can extend the API with composite methods of our own, aggregating multiple Live API calls into one. With this we can effectively extend the Live API with new features.
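JavaScript has a rough equivalent of doesNotUnderstand-style forwarding in its Proxy object. Here is an illustrative sketch (not Caffeine’s Smalltalk implementation) of turning unknown selectors into remote invocation payloads, where `transport` stands in for the websocket send:

```javascript
// Any property access on the proxy becomes a function that forwards the
// selector and arguments as a remote invocation request.
function liveProxy(objectId, transport) {
  return new Proxy({}, {
    get(_target, selector) {
      return (...args) =>
        transport(JSON.stringify({ receiver: objectId, selector, args }));
    }
  });
}
```

With this, `clip.fire()` on a proxy produces a forwarded request without `fire` ever being defined locally, which is the same late-binding trick the Smalltalk doesNotUnderstand method performs.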

extending the Live API

One area of Live API extension where I’m working now is in song composition. Live has an Arrangement view, for a traditional recording studio workflow, and a Session view, for interactive performance. I find the “scenes” feature of the Session view very useful for sketching song sections, but Live’s support for playing them in different orders is minimal. With Caffeine objects representing scenes, I can compose larger structures from them, and play them however I like.

How would you extend the Live API? How would you simplify it?

The Node.JS server, JS proxying code, and the Max patcher that connects them are available as a self-contained M4L device, which can be applied to any Live track. Look for it in the devices folder of the Caffeine repository.

by Craig Latta at June 05, 2021 02:02 AM

June 04, 2021


Promote your Pharo project via the PharoProject Tweet account …

Tweeting all the interesting stuff is a lot of work. If you released some library or wrote a blog post (or just found something interesting) where you think “that should be tweeted by @pharoproject”, we now have a submit link. (The link is in the “bio” of the Twitter account, so it is easy to find if needed.)

by Stéphane Ducasse at June 04, 2021 07:02 PM

May 25, 2021


Our journey to JIT and other beasts.

For the past six months we have been working on different optimizations, such as basic block introduction in CogIt and basic block reordering to maximise fallthrough, and more recently we started to write tests and do some fixing of the Scorch native optimizer developed by C. Béra.

This is an amazing way to learn a domain, and I’m learning a lot. Our idea is to use scenarios to assess the weak points of our infrastructure, so that we can fix them in the future. Our goal is a better Slang, and a better, more flexible VM and nativizer. Step by step we are learning things that I would never have thought I would be learning, which is an amazing and cool feeling.

During this journey we found strange logic in Slang (the VM generator), found Clang bugs that generate assembly code we cannot even disassemble, even for plain little C functions, and extended the ARM back-end to support more exotic instructions than the JIT previously used on that architecture. And recently we ran into some surprising behavior around carry and underflow on ARM.

Most of the work is done by Guillermo Polito, and I love spending days pair programming with him on it (I’m the lurker, but a learning lurker). Guille wrote a little anecdote about the carry behavior on ARM, and I want to share it with you.

Stef (having fun with JIT and other beasts).

by Stéphane Ducasse at May 25, 2021 05:25 PM

May 14, 2021


Nice little post on Matrix

by Stéphane Ducasse at May 14, 2021 02:55 PM

May 10, 2021

Program in Objects

Announcing Camp Smalltalk Supreme

I have the green light to proceed with Camp Smalltalk Supreme, the 2022 50th anniversary edition of Camp Smalltalk.

It’s scheduled for June 10-12, 2022 at Ryerson University in Toronto, Canada.

I’ve confirmed Adele Goldberg, Dan Ingalls, and Kent Beck as keynote speakers for this very special event! Adele and Dan were part of the original team at Xerox PARC, and Kent is a renowned Smalltalk pioneer.

Here is the official website:
Here is the promo video:
Here is the GoFundMe campaign:

I hope many people will attend this event. It should be a blast.

by smalltalkrenaissance at May 10, 2021 03:01 AM

May 01, 2021


Pharo 9.0 is going beta


After a long alpha version and a large list of enhancements and exciting new features, Pharo 9.0 is entering beta freeze.

What does this mean?

  1. No more features, just bugfixes
  2. No more fixes of things that can wait until Pharo 10

In two or three weeks we will open the Pharo 10 development branch; until then, please help fix what’s important for Pharo 9.0! We have tagged the issues to reflect this information (milestones + importance).

The Pharo crew

by Stéphane Ducasse at May 01, 2021 10:57 AM

April 29, 2021


Advanced stepping with the new Pharo debugger

The new Pharo debugger is getting exciting: after the object-centric features, we can now define our own commands.

Well done

by Stéphane Ducasse at April 29, 2021 07:21 AM

April 27, 2021

Craig Latta

Beatshifting: playing music in sync and out of phase

two Beatshifting timelines

I’ve written a Caffeine app implementation of the Beatshifting algorithm, for collaborative remote music performance that is synchronized and out-of-phase. Beatshifting uses network latency as a rhythmic element, using offsets from beats as timestamps, with a shared metronome and score.

I was inspired to write the Beatshifting app by NINJAM, a similar system that has hosted many hours of joyous sessions. There are a few interesting twists I think I can bring to the technology, through late-binding of audio rendering.

NINJAM also synchronizes distributed streams of rhythmic music. It works by using a server that collects an entire measure of audio from the performers’ timestamped streams, stamps it all with an upcoming measure number, and sends it back to each performer. Each performer’s system plays the collected measures with their start times aligned. In effect, each performer plays along with what everyone else did a measure ago. Each performer’s system must receive the audio only by the start of the upcoming measure, rather than fast enough to create the illusion of simultaneity.

Beatshifting gives more control over the session to each performer, and to an audience as well. Each performer can modify not only the local volume levels of the other performers, but also their delays and instruments. Each performer can also change the tempo and time signature of the session. A session can have an audience as well, and each audience member is really a performer who hasn’t played anything yet.

It’s straightforward to have an arbitrary number of participants in a session because Beatshifting takes the form of a web app. Each participant only needs to visit a session link in a web browser, rather than use a special digital audio workstation (DAW) app. By default, Beatshifting uses MIDI event messages instead of audio, using much less bandwidth even with a large group.

To deliver events to each participant’s web browser, Beatshifting uses the Croquet replication service. Croquet is able to replicate and synchronize any JavaScript object in every participant’s web browser, up to 60 times per second. Beatshifting uses this to provide a shared score. Music events like notes and fader movements can be scheduled into the score by any participant, and from code run by the score itself.

One piece of code the score runs broadcasts events indicating that measures have elapsed, so that the web browsers can render metronome clicks. There are three kinds of metronome clicks, for ticks, beats, and measures. For example, with a time signature of 6/8, there are two beats per measure, and three ticks per beat. Each tick is an eighth-note, so each beat is a dotted-quarter note. The sequence of clicks one hears is:

At a tempo of 120 beats per minute, or 240 clicks per 60,000 milliseconds, there are 250 milliseconds between clicks. Each time a web browser receives a measure-elapsed event, it schedules MIDI events for the next measure’s clicks with the local MIDI output interface. Since each web browser knows the starting time of the session in its output MIDI interface’s timescale, it can calculate the timestamps of all ensuing clicks.
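The click arithmetic can be written out as a small helper, sketched here in JavaScript with illustrative parameter names:

```javascript
// Given the session start time (in the MIDI output's timescale), the click
// rate, and a measure index, compute the timestamps of that measure's clicks.
function clickTimestamps(startMs, clicksPerMinute, clicksPerMeasure, measureIndex) {
  const msPerClick = 60000 / clicksPerMinute;   // 240 clicks/min -> 250 ms apart
  const measureStart = startMs + measureIndex * clicksPerMeasure * msPerClick;
  return Array.from({ length: clicksPerMeasure },
                    (_, i) => measureStart + i * msPerClick);
}
```

Since the arithmetic depends only on the session’s start time, any measure’s click timestamps can be computed without having heard the preceding ones.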

When a performer plays a note, their web browser notes the offset in milliseconds between when the note was played and the time of the most recent click. The web browser then publishes an event-scheduling message, to which the score is subscribed. The score then broadcasts a note-played event to all the web browsers. Again, it’s up to each web browser to schedule a corresponding MIDI note with its local MIDI output interface. The local timestamp of that note is chosen to be the same millisecond offset from some future click point. How far in the future that click is can be chosen based on who played the note, or any other element of the event’s data. Each web browser can also choose other parameters for each event, like instrument, volume level, and panning position.
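The scheduling rule in this paragraph reduces to a few lines. This JavaScript sketch (names mine) keeps the note’s millisecond offset from its preceding click and re-anchors it to a click some number of clicks in the future:

```javascript
// Re-anchor a played note: preserve its offset from the most recent click,
// but schedule it relative to a future click chosen by the local browser.
function beatshift(noteTimeMs, lastClickMs, msPerClick, clicksAhead) {
  const offset = noteTimeMs - lastClickMs;       // offset within the click
  return lastClickMs + clicksAhead * msPerClick + offset;
}
```

So a note played 30 ms after a click at 1000 ms, shifted four 250 ms clicks ahead, lands at 2030 ms: the rhythm’s feel survives, displaced in time.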

Quantities like tempo are part of the score’s state, and can be changed by any performer or audience member. Croquet ensures that the changed JavaScript variables are synchronized in all the participants’ web browsers.

With so many decisions about how music events are rendered left to each web browser, the mix that each participant hears can be wildly different. The only constants are the millisecond beat offsets of each performer’s notes. I think it’ll be fun to compare recordings of these mixes after the fact, and to make new ones from individual recorded tracks.

There’s no server that any participant needs to set up, and the Croquet service knows nothing of the Beatshifting protocol. This makes it very easy to start and join new sessions.

next steps

The current Beatshifting UI has controls for joining a session, enabling the local scheduling of metronome clicks, and changing the tempo and time signature of a session.

the current Beatshifting UI

If one is using a MIDI output interface connected to a DAW, then one may use the DAW to control instruments, volume, panning, and so on. I’d also like to provide the option of having all MIDI event rendering performed by the web browser, with a UI for controlling and recording that. I’ve established the use of the ToneJS audio framework for rendering events, and am now developing the UI.

I led a debut performance of Beatshifting as part of the Netherlands Coding Live concert series, on 23 April 2021.

I’ve written an animated 3D visualization of the Beatshifting algorithm, which can be driven from live session data. This movie is an annotated slow-motion version:

visualizing the Beatshifting algorithm

I’m excited about the creative potential of Beatshifting sessions. Please contact me if you’re interested in playing or coding for this medium!

by Craig Latta at April 27, 2021 12:22 AM

April 25, 2021


JIT VM for M1 beta testers


Just read on Discord that you can now download the new ARM64 JIT VM for that machine. You just need to do:

wget -O - | bash


by Stéphane Ducasse at April 25, 2021 08:47 AM

April 02, 2021


Write your own extension to the Pharo 9.0 debugger

Well done the debugging department of Pharo

by Stéphane Ducasse at April 02, 2021 08:51 AM

April 01, 2021


Pharo 9.0 refactoring support improves steadily…

Check extract method for a productivity boost.

Well done Evelyn from Semantics S.R.L.

by Stéphane Ducasse at April 01, 2021 09:46 AM

March 18, 2021

Pierce Ng

Dual Boot Windows 10 and Xubuntu 20.04, Two Disks, LUKS

I've set up dual boot on my laptop as per the post title. The article is long because of the many screenshots and as such has its own page.

March 18, 2021 09:56 PM

March 12, 2021


PharoJS on Pharo 9.0

Hi everyone,
We have been working on porting PharoJS to Pharo 9 for a while now. And we managed to reach the end of the tunnel this week. All PharoJS tests are now green on Pharo 9.
Find out more at:
Dave & Noury

by Stéphane Ducasse at March 12, 2021 04:02 PM

[Ann] Pharo accepted into GSoC

Dear all,
great news I want to share with you: Pharo has been selected to be part of GSoC 2021!
Thank you to the great team of admins for making this happen: Oleksandr Zaitsev, Gordana Rakic and Juan Pablo Sandoval Alcocer !

We will send updates on the student selection process soon. Regards,
Serge Stinckwich

by Stéphane Ducasse at March 12, 2021 02:02 PM

March 08, 2021


The magic of being able to locally debug an exception produced in production :)

by Stéphane Ducasse at March 08, 2021 08:39 PM

February 19, 2021


More than 2000 more Unit tests!

By improving the SUnit logic run on our build servers, we are now running more than 2000 unit tests that were previously ignored in the case of parametrized tests.

Lessons learned: avoid duplication and divergent logic, because duplication often bites you.

Pharo consortium.

by Stéphane Ducasse at February 19, 2021 11:55 AM

February 16, 2021


New VM for M1 machines for testing

Hello happy Pharoers
Today we could access the building where the M1 machine is, and Pablo packaged it so that you can test the first version.
Pablo wrote a little blog post for you.
Let us know how it goes, since we do not have the M1 at hand ourselves and are still waiting to be able to make it accessible from our build farm.

The crew was fixing other VM glitches, so we will soon be ready to focus on the JIT version.


by Stéphane Ducasse at February 16, 2021 07:41 PM