Planet Squeak

blogs about Squeak, Pharo, Croquet and family

October 21, 2014

Torsten Bergmann

Amber 0.13 is released

Read more.

by Torsten (noreply@blogger.com) at October 21, 2014 07:13 AM

October 19, 2014

Julian Fitzell

Declaring Seaside sub-components in the #children method

People often ask why they need to define the #children method on Seaside components. I wrote a long email today to the mailing list explaining how #children is currently used and why it's important and I thought it might be useful to post it here as well so it's easier to find and link to. 

When you render a child component you are implicitly defining a tree of components. #children simply allows the framework to walk the component tree explicitly. The reasons for needing to walk the tree explicitly have changed over time, which is part of the reason for the confusion.

At one point, for example, we used to walk the tree to give each component a chance to handle callbacks, so if your component wasn't in #children it would never even have seen its callbacks. That is no longer the case (which is actually a bit of a shame because decorations can no longer intercept them, but I digress).

If you look in the image for users of WAVisiblePresenterGuide and WAAllPresenterGuide, you will see the current cases where we need to traverse the tree:
  1. Calling #updateStates: for snapshotting/backtracking
  2. Calling #initialRequest: when a new session is started
  3. Executing tasks (they need to execute outside of the render phase to make sure the render phase does not have side effects)
  4. Calling #updateRoot:
  5. Calling #updateUrl:
  6. Displaying Halos for each component in development mode
  7. Generating the navigation path in WATree
  8. Detecting which components are visible/active to support delegation (#call:/#show:)
Keep in mind that basically all these things happen before rendering, so if you create new components inside #renderContentOn: they will miss out on all of the above: try to create your sub-components either when your component is initialized or during a callback. If your child component doesn't rely on any of the above (and doesn't itself use any child components that do), then technically everything will work fine without adding it to #children. But keep in mind that:
Finally, components are stateful by definition, so if you don't feel the need to persist your component between render phases, it probably shouldn't be a component. For stateless rendering you're better off subclassing WAPainter directly, or even WABrush: both of these are intended to be used and then thrown away, and they will make it clearer in your mind whether or not you're relying on things that depend on #children.
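A minimal sketch of the pattern described above (class and package names are hypothetical): the sub-component is created in #initialize, declared in #children, and only rendered in #renderContentOn::

```smalltalk
"Hypothetical example: a parent component that owns one sub-component.
 The child is created when the parent is initialized, never inside
 #renderContentOn:, and is declared in #children so the framework can
 walk the component tree before the render phase."
WAComponent subclass: #MyParent
    instanceVariableNames: 'editor'
    classVariableNames: ''
    package: 'MyApp'

MyParent >> initialize
    super initialize.
    editor := MyEditor new

MyParent >> children
    ^ Array with: editor

MyParent >> renderContentOn: html
    html heading: 'Parent'.
    html render: editor
```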

by Julian Fitzell (noreply@blogger.com) at October 19, 2014 12:53 PM

October 17, 2014

Torsten Bergmann

Phratch 4.0 release

Phratch 4.0 is available. Read more or grab it from http://www.phratch.com/.

by Torsten (noreply@blogger.com) at October 17, 2014 11:24 AM

Spur trunk image available

New Spur trunk image is available.

by Torsten (noreply@blogger.com) at October 17, 2014 07:57 AM

October 16, 2014

Torsten Bergmann

PharoNOS - Pharo No Operating System

Booting Pharo as an OS from an ISO image of only 55MB. Read more, and take care where you use it (best on a virtual platform, so as not to erase your disks).

by Torsten (noreply@blogger.com) at October 16, 2014 07:36 PM

Pharo 4 progressing

Pharo 4.0 is not yet released but is progressing. Besides many, many other things, some of the interesting items include:


by Torsten (noreply@blogger.com) at October 16, 2014 09:26 AM

October 12, 2014

Göran Krampe

Here Comes Nim!

I just posted an article comparing some silly benchmarks between Cog Smalltalk and LuaJIT2. Now... let's take a look at one of the latest "Cool Kids" on the language front, Nimrod - or as it has been renamed - Nim.

So I love programming languages. But generally I have preferred dynamically typed languages, because I have felt that the static type systems we have suffered with for the last... 25 years or so have basically been completely awful. Perhaps not in implementation performance, but definitely in developer productivity and in enabling good quality code.

Working in, say, Smalltalk has been like soaring free in the skies instead of crawling in the mud with C++; well, you get my point. I still recall one of the first 30-line "programs" I wrote in Smalltalk: I just quickly jotted it down, and it ran perfectly on the first try. I attributed that to the concise language and "noise free" code (no static type noise!). My first stumblings in C++? Ha! It's not even funny.

But I have never been against static type information per se; I mean, it's just a question of what we need to tell the compiler. So if we can have a type system that can reason about my code at compile time without bogging me down in endless typing and convoluted code that is hard to change and refactor, hard to read and hard to write... argh! Then yes, by all means.

Many new statically typed languages have started incorporating type inference. I haven't really worked with them; Haskell is perhaps the best known. Dart is a dynamically typed language that instead opted to add mechanisms for static type annotation in order to help on the tooling front, a clever idea IMHO.

Rust, Go and... Nim!

In another article I wrote about Rust and Go (and more) - two new, popular and very interesting statically typed languages. But I admit that I totally missed Nim! Previously known as Nimrod, ehrm. And yes, Nim does have type inference, although to a balanced extent.

Just like LuaJIT2 has been a "Tour de Force" of a single author, so has Nim - and that is interesting in itself. I would say it has been a pattern over the years with most successful open source languages.

Ok... so enough chit chatting, let's look at that silly benchmark I played with earlier - in Gang Nim Style:

Nim silly benchmark

# Import times for time measurements and future for some upcoming features (naked lambdas)
import times, future

# Just a recursive function, the astute reader notes that we declare a return type of int64
proc benchFib(fib): int64 =
  if fib < 2:
    return 1
  else:
    return 1 + benchFib(fib-1) + benchFib(fib-2)

# Trivial helper to measure time, discard is used to show we ignore the return value
proc timeToRun(bench): float =
  var start = epochTime()
  discard bench()
  epochTime() - start

# And this is the bytecode benchmark translated from Squeak
proc benchmark(n): int =
  const size = 8190
  var flags: array[1..size, bool]
  for j in 1..n:
    result = 0
    # Clear all to true
    for q in 1..size:
      flags[q] = true
    for i in 1..size:
      if flags[i]:
        let prime = i + 1
        var k = i + prime
        while k <= size:
          flags[k] = false
          inc(k, prime)
        inc(result)

# And echo the time to run these two, note the naked lambda syntax "() => code"
# The "&" is string concatenation. The $ before timeToRun is "toString"
echo($timeToRun(() => benchmark(100000)) & " bytecode secs")
echo($timeToRun(() => benchFib(47)) & " sends secs")

Whoa! Look at that code! I mean... it looks like Python-something! And not much static type noise going on, in fact there are only 4 places where I specify types - the three return types of the three procs, and I also specify that the array has bools in it. That's it.

If you recall the numbers from LuaJIT2:

10.248302 bytecode secs
26.765077 send secs

...and Cog:

49.606 bytecode secs
58.224 send secs

...then Nim mops the floor with them:

2.6 bytecode secs
7.6 sends secs

So Nim is about 4x faster on bytecode speed and 3x faster on recursive calls compared to LuaJIT2. And about 20x faster on bytecode speed and 8x faster on recursive calls compared to Cog.

A few things of note:

Conclusion

I wrote this as a teaser - Nim is actually a brutally darn cool language. Lots of people seem to prefer it over Rust: it has a "pragmatic" soul, whereas Rust is very focused on safety, safety, safety. Nim is more focused on developer productivity. And it has a lot of nice stuff:

I would say that if you are thinking of perhaps taking a peek at the "Dark Side" (static typing) - then this is it. I think Nim will make you smile in more ways than one.

October 12, 2014 10:00 PM

Cog vs LuaJIT2

In the open source Smalltalk community we have a pretty fast VM these days - it's called Cog and is written by the highly gifted and experienced Eliot Miranda, who also happens to be a really nice guy! Cog is fast and it's still improving, with some more developers joining recently.

Another very fast VM is LuaJIT2 for the Lua language (version 5.1), also written by a single individual with extraordinary programming talent - Mike Pall. LuaJIT2 is often mentioned as the fastest dynamically typed language (or VM), and even though Lua is similar to Smalltalk (well, it's actually very similar to Javascript) it's also clearly a different beast with other characteristics. If you start looking at the world of game development, Lua appears everywhere.

I am a language lover, but I admit to having only glanced at Lua earlier and wrongfully dismissed it into the category of "less capable and quirky languages". Now that I have looked closer, I realize it's a similar "hidden gem" to what Smalltalk is! And just as Smalltalk was given a boost of interest when Ruby hit the scene, I guess Lua gets an influx now with the Javascript craze. And the gaming world keeps infusing it with fresh code.

Lies and, well basically just lies...

In Squeak Smalltalk we have this silly little benchmark called tinyBenchmarks. It's not for benchmarking. No, let me repeat that - it really is not.

But hey, let's just ignore that for a second. :) And oh, when comparing Lua vs Smalltalk, speed is only a tiny piece of the picture - there are other, much more interesting characteristics of these two ecosystems that would affect any kind of "choice":

So Smalltalk excels in interactive development - but you are often limited in integration options... while Lua excels as a dynamic language thriving as a catalyst in the C and C++ ecosystem.

These languages and their tools were simply born from two very different sets of goals. Lua is also different from, say, Python, since Lua was designed as a minimal language (in many ways like Smalltalk was) meant for embedding in a C/C++ application (almost no batteries included). This carves out a fairly unique niche for Lua.

Ok, so back to tinyBenchmarks. It consists of two small snippets of code, one measures "bytecode speed" by counting primes in a few nested loops and another is a recursive function similar to Fibonacci that just counts the number of recursive calls. Ok, so... latest Cog (binary VM.r3000) vs latest LuaJIT2 (2.1.0-alpha), here follows the Lua code I whipped up. I tried basically three different variants on the Fibonacci recursion to see how much penalty an OO design would give.

Lua silly benchmark

local class = require("classy")

-- First here is how you would do it in proper Lua, just a recursive function
local function benchFib(fib)
  if fib < 2 then
    return 1
  end
  return 1 + benchFib(fib-1) + benchFib(fib-2)
end

-- Or using a metatable for a bit more manual OO style
local Bench = {}
Bench.__index = Bench

-- A constructor
function Bench.new()
  local bench = {}
  setmetatable(bench, Bench)
  return bench
end

-- And a method in it
function Bench:benchFib(fib)
  if fib < 2 then
    return 1
  end
  return self:benchFib(fib-1) + self:benchFib(fib-2) + 1
end

-- A variant using the "Classy" OO lib. Another popular one is called "MiddleClass"
local Benchy = class("Benchy")

-- And a method in it
function Benchy:benchFib(fib)
  if fib < 2 then
    return 1
  end
  return self:benchFib(fib-1) + self:benchFib(fib-2) + 1
end

-- And this is the bytecode benchmark translated just as it says in Squeak
local function benchmark(n)
  local size = 8190
  local count = 0
  for j=1,n do
    count = 0
    local flags = {}
    for q=1,size do
      flags[q] = true
    end
    for i=1,size do
      if flags[i] then
        local prime = i+1
        local k = i + prime
        while k <= size do
          flags[k] = false
          k = k + prime
        end
        count = count + 1
      end
    end
  end
  return count
end

-- Return seconds to run fn
local function timeToRun(fn)
  local start = os.clock()
  fn()
  return os.clock() - start
end

t1 = timeToRun(function() benchmark(100000) end)
t2 = timeToRun(function() benchFib(47) end)
t3 = timeToRun(function() Bench.new():benchFib(47) end)
t4 = timeToRun(function() Benchy():benchFib(47) end)

print(t1 .. ' bytecode secs')
print(t2 .. ' benchFib send secs (normal Lua)')
print(t3 .. ' Bench send secs (OO Lua)')
print(t4 .. ' Benchy send secs (OO Lua Classy)')

And the Smalltalk code would be:

Squeak silly benchmark

Transcript show: ([100000 benchmark] timeToRun / 1000.0) asString, ' bytecode secs'; cr.
Transcript show: ([47 benchFib] timeToRun / 1000.0) asString, ' send secs'; cr.

I picked 100000 and 47 to get fairly long running times, so LuaJIT:

10.248302 bytecode secs
26.765077 benchFib send secs (normal Lua)
70.418739 Bench send secs (OO Lua)
71.003568 Benchy send secs (OO Lua Classy)

...and Cog:

49.606 bytecode secs
58.224 send secs

So LuaJIT2 is about 4x faster on bytecode speed and 2x faster on recursive calls.

But wait, lots of things to note here:

Conclusion

There is no real conclusion from the silly benchmark - it was just a fun exercise! I already knew Cog is pretty darn fast and LuaJIT is the King of the Hill - it even beats V8 regularly. Cog on the other hand is executing a much more refined and advanced language and object system, you really do need to keep that in mind here.

But I hope that Smalltalkers especially might get intrigued by this article and find Lua interesting. To me it's much nicer than Javascript. It's also the first time in many years that I have found a language that can actually keep my interest for a longer period - despite me constantly comparing it with my beloved Smalltalk.

Python could never keep me interested, although I did try - it kept turning me off: so unclean, lots of different mechanisms, too complicated. Same story with Ruby: too messy, no elegance, and a community drug-crazed with nifty tricks... IMHO.

But Lua has that smallness that Smalltalk has; it feels "designed". It has strong abstractions that it carries through all the way. In short, it doesn't turn me off. And then, a lot more in this ecosystem is pure candy. LuaJIT2 has awesome speed. Very powerful interoperability with C and C++ - in fact, the LuaJIT FFI can call C libraries as fast as C can! Tons of good libraries and tools. Very strong on mobile development. Many interesting projects like TurboLua, Tarantool, OpenResty, Lapis, Metalua etc.

Always nice to realize that there still is stuff out there that can attract an old Smalltalk dog... :)

October 12, 2014 10:00 PM

October 04, 2014

Bert Freudenberg

Deconstructing Floats: frexp() and ldexp() in JavaScript

While working on my SqueakJS VM, it became necessary to deconstruct floating point numbers into their mantissa and exponent parts, and to assemble them again. Peeking into the C sources of the regular VM, I saw they use the frexp() and ldexp() functions found in the standard C math library.

Unfortunately, JavaScript does not provide these two functions. But surely there must have been someone who needed these before me, right? Sure enough, a Google search came up with a few implementations. However, an hour later I was convinced none of them actually are fully equivalent to the C functions. They were imprecise, that is, deconstructing a float using frexp() and reconstructing it with ldexp() did not result in the original value. But that is the basic use case: for all float values, if

[mantissa, exponent] = frexp(value)
then
value = ldexp(mantissa, exponent)
even if the value is subnormal. None of the implementations (even the complex ones) really worked.

I had to implement it myself, and here is my implementation (also as JSFiddle):
function frexp(value) {
    if (value === 0) return [value, 0];
    var data = new DataView(new ArrayBuffer(8));
    data.setFloat64(0, value);
    var bits = (data.getUint32(0) >>> 20) & 0x7FF;
    if (bits === 0) {
        data.setFloat64(0, value * Math.pow(2, 64));
        bits = ((data.getUint32(0) >>> 20) & 0x7FF) - 64;
    }
    var exponent = bits - 1022,
        mantissa = ldexp(value, -exponent);
    return [mantissa, exponent];
}


function ldexp(mantissa, exponent) {
    return exponent > 1023 // avoid multiplying by infinity
        ? mantissa * Math.pow(2, 1023) * Math.pow(2, exponent - 1023)
        : exponent < -1074 // avoid multiplying by zero
        ? mantissa * Math.pow(2, -1074) * Math.pow(2, exponent + 1074)
        : mantissa * Math.pow(2, exponent);
}
My frexp() uses a DataView to extract the exponent bits of the IEEE-754 float representation. If those bits are 0, the value is subnormal. In that case I normalize it by multiplying by 2^64, getting the bits again, and subtracting 64. After applying the bias, the exponent is ready, and is used to get the mantissa by canceling out the exponent from the original value.

My ldexp() is pretty straightforward, except it needs to be able to multiply by very large and very small numbers. The smallest positive float is 0.5 × 2^-1073, and to get its mantissa we need to multiply by 2^1073. That is larger than the largest float, 2^1023. By multiplying in two steps we can deal with that.
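A quick way to convince yourself of the round-trip property is to check a normal value and the smallest subnormal. The two functions from this post are repeated here so the snippet stands alone:

```javascript
// frexp()/ldexp() as described in the post, plus a round-trip check.
function frexp(value) {
    if (value === 0) return [value, 0];
    var data = new DataView(new ArrayBuffer(8));
    data.setFloat64(0, value); // big-endian: high word is at offset 0
    var bits = (data.getUint32(0) >>> 20) & 0x7FF;
    if (bits === 0) { // subnormal: normalize by scaling up by 2^64
        data.setFloat64(0, value * Math.pow(2, 64));
        bits = ((data.getUint32(0) >>> 20) & 0x7FF) - 64;
    }
    var exponent = bits - 1022,
        mantissa = ldexp(value, -exponent);
    return [mantissa, exponent];
}

function ldexp(mantissa, exponent) {
    return exponent > 1023 // avoid multiplying by infinity
        ? mantissa * Math.pow(2, 1023) * Math.pow(2, exponent - 1023)
        : exponent < -1074 // avoid multiplying by zero
        ? mantissa * Math.pow(2, -1074) * Math.pow(2, exponent + 1074)
        : mantissa * Math.pow(2, exponent);
}

// 3.0 === 0.75 * 2^2, so frexp(3.0) gives [0.75, 2]
console.log(frexp(3.0));
console.log(ldexp(0.75, 2)); // 3
// Number.MIN_VALUE (2^-1074) is subnormal; the round trip must still hold
var pair = frexp(Number.MIN_VALUE);
console.log(ldexp(pair[0], pair[1]) === Number.MIN_VALUE); // true
```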

So there you have it. Hope it's useful to someone. And here is the version I put into SqueakJS, if you're curious.

Correction: The code I originally posted here for ldexp() still had a bug: it did not test for too-small exponents. Here is the fix.

by Bert (noreply@blogger.com) at October 04, 2014 11:17 PM

October 01, 2014

Torsten Bergmann

ESUG 2014 Photos

Photos from European Smalltalk User Group (ESUG) 2014 conference can be found here.

by Torsten (noreply@blogger.com) at October 01, 2014 06:14 AM

Smalltalk news on all sides

While I often blog about news in the open source Smalltalk scene, one should note that the commercial Smalltalk vendors are also doing well. Interesting news from Instantiations about the upcoming VASmalltalk, and similar news from Cincom about Cincom Smalltalk, appear regularly on the web. Nice!

by Torsten (noreply@blogger.com) at October 01, 2014 06:12 AM

Gilad Bracha

A DOMain of Shadows

One of the advantages of an internal DSL over an external one is that you can leverage the full power of a general purpose programming language. If you create an external DSL, you may need to reinvent a slew of mechanisms that a good general purpose language would have provided: things like modularity, inheritance, control flow and procedural abstraction.

In practice, it is unlikely that the designer of the DSL has the resources or the expertise to reinvent and reimplement all these, so the DSL is likely to be somewhat lobotomized. It may lack the facilities above entirely, or it may have very restricted versions of some of them. These restricted versions are mere shadows of the real thing; you could say that the DSL designer has created a shadow world.

I discussed this phenomenon as part of a talk I gave at Onward in 2013. This post focuses on a small part of that talk.

Here are three examples that might not always be thought of as DSLs at all, but definitely introduce a shadow world.

Shadow World 1: The module system of Standard ML.

ML modules contain type definitions. To avoid the undecidable horrors of a type of types, ML is stratified. There is the stratum of values, which is essentially a sugared lambda calculus. Then there is the stratum of modules and types. Modules are called structures, and are just records of values and types. They are really shadow records because, at this level, by design, you can no longer perform general purpose computation. Of course, being a statically typed language, one wants to describe the types of structures. ML defines signatures for this purpose. These are shadow record types. You cannot use them to describe the types of ordinary variables.

It turns out one still wants to abstract over structures, much as one would over ordinary values. This is necessary when one wants to define parameterized modules. However, you can't do that with ordinary functions. ML addresses this by introducing functors, which are shadow functions. Functors can take and return structures, typed as signatures. However, functors cannot take or return functors, nor can they be recursive, directly or indirectly (otherwise we'd be back to the potentially non-terminating compiler the designers of ML were trying so hard to avoid in the first place).

This means that modules can never be mutually recursive, which is unfortunate since this turns out to be a primary requirement for modularity. It isn’t a coincidence that we use circuits for electrical systems and communication systems, to name two prominent examples.  

It also means that we can’t use the power of higher order functions to structure our modules. Given that the whole language is predicated on higher order functions as the main structuring device, this is oddly ironic.

There is a lot of published research on overcoming these limitations. There are papers about supporting restricted forms of mutual recursion among ML modules.  There are papers about allowing higher-order functors. There are papers about combining them. These papers are extremely ingenious and the people who wrote them are absolutely brilliant. But these papers are also mind-bogglingly complex.  

I believe it would be much better to simply treat modules as ordinary values. Then, either forego types as module elements entirely (as in Newspeak)  or live with the potential of an infinite loop in the compiler. As a practical matter, you can set a time or depth limit in the compiler rather than insist on decidability.  I see this as a pretty clear cut case for first class values rather than shadow worlds.

Shadow World 2: Polymer

Polymer is an emerging web standard that aims to bring a modicum of solace to those poor mistreated souls known as web programmers. In particular, it aims to allow them to use component based UIs in a standardized way.

In the Polymer world, one can follow a clean MVC style separation for views from controllers. The views are defined in HTML, while the controllers are defined in an actual programming language - typically Javascript, but one can also use Dart and there will no doubt be others. All this represents a big step forward for HTML, but it remains deeply unsatisfactory from a programming language viewpoint.

The thing is, you can’t really write arbitrary views in HTML. For example, maybe your view has to decide whether to show a UI element based on program logic or state. Hence you need a conditional construct. You may have heard of these: things like if statements or the ?: operator. So we have to add shadow conditionals.

<template if="{{usingForm}}">

is how you’d express  

if (usingForm) someComponent;

In a world where programmers cry havoc over having to type a semicolon, it’s interesting how people accept this. However, it isn’t the verbose, noisy syntax that is the main issue.

The conditional construct doesn't come with an else or elsif clause, nor is there a switch or case. So if you have a series of alternatives such as

if (cond1) {ui1}
else if (cond2) {ui2}
        else {ui3}

You have to write


<template if = "{{cond1}}">
<ui1>
</template>
<template if = "{{cond2 && !cond1}}">
<ui2>
</template>
<template if = "{{!cond1 && !cond2}}">
<ui3>

</template>


A UI might have to display a varying number of elements, depending on the size of a data structure in the underlying program. Maybe it needs to repeat the display of a row in a database N times, depending on the amount of data. We use loops for this in real programming. So we now need shadow loops.


<template repeat = "{{task in current}}">

There’s also a for loop


<template repeat= "{{ foo, i in foos }}">

Of course one needs to access the underlying data from the controller or model, and so we need a way to reference variables. So we have shadow variables like

{{usingForm}} 

and shadow property access.

{{current.length}}

Given that we are building components, we need to use components built by others, and the conventional solution to this is imports. And so we add shadow imports.


<link rel = "import" href = "...">

UI components are a classic use case for inheritance, and Polymer components can be derived from each other, starting with the predefined elements of the DOM, via shadow inheritance. It is only a matter of time before someone realizes they would like to reuse properties from components in different hierarchies via shadow mixins.

By now we’ve defined a whole shadow language, represented as a series of ad hoc constructions embedded in string-valued attributes of HTML.  A key strength of HTML is supposed to be ease-of-use for non-programmers (this is often described by the meaningless phrase declarative). Once you have added all this machinery, you’ve lost that alleged ease of use - but you don’t have a real programming language either. 

Shadow World 3: Imports

Imports themselves are a kind of shadow language, even in a real programming language. Of course imports have other flaws, as I've discussed here and here, but that is not my focus today. Wherever you have imports, you find demands for conditional imports, for an aliasing mechanism (import-as), and for a form of iteration (wildcards). All these mechanisms already exist in the underlying language, and yet they are typically unavailable because imports are second-class constructs.

Beyond Criticism

It is very easy to criticize other people’s work. To quote Mark Twain:

I believe that the trade of critic, in literature, music, and the drama, is the most degraded of all trades, and that it has no real value 

So I had better offer some constructive alternative to these shadow languages. With respect to modularity, Newspeak is my answer. With respect to UI, something along the lines of the Hopscotch UI framework is how I’d like to tackle the problem. In that area, we still have significant work to do on data binding, which is one of the greatest strengths of polymer. In any case, I plan to devote a separate post to show how one can build an internal DSL for UI inside a clean programming language. 

The point of this post is to highlight the inherent cost of going the shadow route. Shadow worlds come into being in various ways. One way is when we introduce second-class constructs because we are reluctant to face up to the price of making something a real value. This is the case in the module and import scenarios above. Another way is when one defines an external DSL (as in the HTML/Polymer example). In all these cases, one will always find that the shadows are lacking.


Let’s try and do better.

by Gilad Bracha (noreply@blogger.com) at October 01, 2014 05:19 AM

September 30, 2014

Torsten Bergmann

Visualize network latency using Pharo

Visualize network latency using Pharo. Read more here.

More info here.

by Torsten (noreply@blogger.com) at September 30, 2014 09:04 AM

VISSOFT 2014

The 2nd IEEE Working Conference on Software Visualization is currently being held in Victoria, Canada. From the Twitter posts, it looks like the Pharo-based agile visualization tools are interesting to the participants.

by Torsten (noreply@blogger.com) at September 30, 2014 08:50 AM

September 24, 2014

Torsten Bergmann

Smalltalk block translator

Use blocks for parsing. Read more.

by Torsten (noreply@blogger.com) at September 24, 2014 06:04 PM

Quicksilver - a Framework for Hierarchical Data Analysis in Pharo

Read the paper.

by Torsten (noreply@blogger.com) at September 24, 2014 06:03 PM

September 21, 2014

Torsten Bergmann

Calling Blender

Calling Blender using Pharo with Ephestos 0.1

by Torsten (noreply@blogger.com) at September 21, 2014 06:48 PM

PocketCube solved by DijEvolution

Dijkstra shortest path search algorithm can find solution for PocketCube using Pharo. See the video here.

by Torsten (noreply@blogger.com) at September 21, 2014 06:39 PM

September 19, 2014

Torsten Bergmann

Bootstrap (V0.12.2) for Seaside

An updated version of Bootstrap (V0.12.2) for Seaside is available. You can easily load it from the Configuration Browser in Pharo 3.0 or from the project site.

Besides more tests, it features vertical tabs, which is a simple wrapper of a component found on the web.

Look at the online demo for Bootstrap to see how easily one can use them: http://pharo.pharocloud.com/bootstrap/browser/Vertical%20Tabs

by Torsten (noreply@blogger.com) at September 19, 2014 08:10 AM

DataTables jQuery plugin for Seaside

If you build a web application using Seaside and Pharo, maybe using my Bootstrap wrapper project, you might be interested in a good data table plugin to display tabular data.

There is a nice (commercial) jQuery plugin called DataTables. Esteban made it available as a plugin for Seaside now and describes this here.

by Torsten (noreply@blogger.com) at September 19, 2014 06:18 AM

September 17, 2014

Torsten Bergmann

Saucers 1.5

A nice little game built with Squeak using Morphic.

by Torsten (noreply@blogger.com) at September 17, 2014 07:11 PM

Pharo Sprint Lille 26th september

see here.

by Torsten (noreply@blogger.com) at September 17, 2014 02:09 PM

SciSmalltalk v0.14 is released.

Read more here and check the project here.

by Torsten (noreply@blogger.com) at September 17, 2014 09:46 AM

September 16, 2014

Torsten Bergmann

SortFunctions for Pharo

SortFunctions allows you to easily work with sorting in Smalltalk. Check out the project at SmalltalkHub, where you will also find the docs and examples.

by Torsten (noreply@blogger.com) at September 16, 2014 07:48 PM

Test Coverage with Hapao

Hapao2 for Pharo has arrived. Read more details here.

by Torsten (noreply@blogger.com) at September 16, 2014 06:03 PM

Performance enhancement of list updating operation

see here.

by Torsten (noreply@blogger.com) at September 16, 2014 06:00 PM

September 11, 2014

Squeakland News

How to do timing in Etoys?

I just got this question from my students and I thought there might be others with similar interests. So how can I control sequences of actions over time? Say, I want to use speech bubbles to tell a story. I have a number of sentences to show, one after another, and I want to let some time pass between them to give the reader time to read. How can I do that?

Well, I can build a timer and check the time to trigger actions when a certain time is reached. How can I build a timer?

First of all, you need a variable to count time steps. Open the viewer for your object. Create a variable by clicking the "v" symbol in the top row of the viewer and give your variable a name. I chose "seconds". The default type "Number" is fine, and 0 decimal places are perfect as well.


Now open a new, empty script and drag the tiles that assign a new value to your variable into the script. Change the operation to "increase by" and the number to "1". Make sure the value of your variable is "0" at the start! Name the script "timer".


You already know that the script, once started, will be executed repeatedly until it is stopped, right? But do you also know how fast or how slowly this happens? You can see it when you click on the watch in the top row of the scriptor and hold the mouse button down. And you can also change it there! By default, the script is executed 8 times per second. Change this to once per second!



When you now start the script, it will increase the value of your variable "seconds" by 1 each second! Now you can use the value of the variable in other scripts:


Use an all-scripts tile from the supplies to start both your script and the timer at the same time, and watch :)

Please note: Whether a second in the Etoys project matches a second on a real clock depends on your computer and what other programs are running on it at the same time. It may not be exactly the same, but it will probably be close. That is definitely good enough to control the flow of a story, but for scientific experiments you should use a real timer!

There is another approach to time handling in this post from Ricardo Moran.

You can also find a tutorial to build a timer in project 6 of the book "Powerful Ideas in the classroom" by Kim Rose and B.J. Allen-Conn.

by Rita (noreply@blogger.com) at September 11, 2014 11:48 AM

September 10, 2014

Torsten Bergmann

Seaside 3.1.3

is released. Read the announcement or browse the change log.

by Torsten (noreply@blogger.com) at September 10, 2014 11:22 AM

PointerDetective

A small tool to find references to an object visually. Code is on SmalltalkHub.

Also on PharoWeekly.

by Torsten (noreply@blogger.com) at September 10, 2014 07:52 AM

OSMMaps

OSMMaps is a Pharo package to interact with OpenStreetMap. Read more here.

by Torsten (noreply@blogger.com) at September 10, 2014 06:51 AM