Statepoints vs gcroot for representing call safepoints

In a recent discussion on llvm-commits regarding the statepoint changes which are up for review, I managed to get myself confused and made a couple of inaccurate statements about the existing capabilities of gc.root vs the newly proposed statepoints.  This post is a (hopefully correct) summary of the similarities and differences.

For the purposes of this post, I am only talking about the semantics of the collector at a source language level call site.  The issues highlighted with gc.root and safepoint poll sites in my previous post still stand, but I didn’t do a very good job (in retrospect) of distinguishing between safepoints at call sites, and the additional checks + runtime calls inserted to ensure that running code checks for a safepoint request at some interval.  The points in that post apply to the latter; this one talks about the former.

From a functional correctness standpoint, gc.root and statepoint are equivalent.  They can both support relocating collectors, including those which relocate roots.  To prevent future confusion, let me review how each works.

gc.root uses explicit spill slots in the IR in the form of allocas.  Each alloca escapes (through the gcroot call itself); as a result, the compiler must assume that any readwrite call can both consume and update the values in question.  Additionally, the fact that all calls are readwrite prevents reordering of unrelated loads past the call.  gcroot relies on the fact that no SSA value relocated at a call site is used at a site reachable from the call.  Instead, a new SSA value (whose relation to the original is unknown by the compiler) is introduced by loading from the (potentially clobbered) alloca.  gcroot creates a single stack map table for the entire function.  It is the compiled code’s responsibility to ensure that all values in the allocas are either valid live pointers or null.
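
As an illustrative sketch of the gcroot pattern (using the typed-pointer IR syntax of the era; the collector name and the @allocate helper are placeholders of my own, not part of any real runtime):

```llvm
declare void @llvm.gcroot(i8** %ptrloc, i8* %metadata)
declare void @foo()
declare i8* @allocate()

define void @example() gc "my-collector" {
entry:
  ; Explicit spill slot; the gcroot call causes it to escape.
  %slot = alloca i8*
  call void @llvm.gcroot(i8** %slot, i8* null)
  %obj = call i8* @allocate()
  store i8* %obj, i8** %slot
  ; @foo is treated as readwrite, so it may update the contents of %slot.
  call void @foo()
  ; %obj must not be used past the call; reload a new SSA value instead.
  %obj.new = load i8** %slot
  ret void
}
```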

Statepoints use most of the same techniques.  We rely on not having an SSA value used on both sides of a call, but we manage the relocation via explicit IR relocation operations, not loads and stores.  We require the call to be read/write to prevent reordering of unrelated loads.  Since the spill slots are not visible in the IR, we do not need the reasoning about escapes that gc.root does.
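
A comparable sketch in statepoint form (intrinsic names and signatures shown schematically; the overload suffixes and several literal index arguments are elided for readability, so consult the actual patch for the exact form):

```llvm
define void @example(i8 addrspace(1)* %obj) gc "statepoint-example" {
entry:
  ; The call to @foo is wrapped in a statepoint; %obj is listed as a
  ; gc pointer live across the call.
  %tok = call i32 (...)* @llvm.experimental.gc.statepoint(void ()* @foo,
                                                          i8 addrspace(1)* %obj)
  ; The relocated pointer is a brand new SSA value; no loads or stores,
  ; and no visible spill slot in the IR.
  %obj.rel = call i8 addrspace(1)* @llvm.experimental.gc.relocate(i32 %tok)
  ret void
}
```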

To state this explicitly again, since I screwed it up once before: both statepoints and gc.root can correctly represent relocation semantics in the IR.  In fact, the underlying reasoning about their correctness is rather similar.

They do differ fairly substantially in the details though.  Let’s consider a few examples.

SSA vs Memory – gcroot encodes relocations as memory operations (stores, clobbering calls, loads), whereas statepoint uses first class SSA values.  We believe this makes optimizations more straightforward.

Consider a simple optimization for null pointer relocation.  If the optimizer manages to establish that one of the values being relocated is null, propagating this across a statepoint is straightforward.  (For each gc.relocate, if source is null, replaceAllUsesWith null.)  Implementing this same optimization for gc.root is harder since the store and load may have been reordered from immediately around the call.  This isn’t an unsolvable problem by any means, but it would be a GVN change, not an InstCombine one.  In practice, we believe InstCombine style optimizations to be advantageous since they’re simpler to write and debug.  Arguably, they’re also more powerful given the current pipeline since they have multiple opportunities to trigger.
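
Concretely, the statepoint form of the rewrite is purely local (statepoint arguments elided, value names hypothetical):

```llvm
; Before: the optimizer has established that the pointer being
; relocated is null.
%tok = call i32 (...)* @llvm.experimental.gc.statepoint(void ()* @foo,
                                                        i8 addrspace(1)* null)
%p.rel = call i8 addrspace(1)* @llvm.experimental.gc.relocate(i32 %tok)
; After: relocating null produces null, so every use of %p.rel is
; rewritten to null and the gc.relocate becomes trivially dead.
```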

Derived Pointers – gcroot can represent derived pointers, but only via convention.  There is no convention specified, so it’s up to the frontend to create its own.  Statepoints define a convention (explicitly in the relocation operation) which makes describing optimizations straightforward.

One thing we plan to do with the statepoint representation is to implement an “easily derived pointer” optimization (to run near CodeGenPrep).  On X86, it’s far cheaper to recreate a derived pointer such as base + 5 with a GEP than to relocate it.  Recognizing this case is quite straightforward given the statepoint representation.
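
As a sketch of that rewrite (value names hypothetical, GEP syntax of the era):

```llvm
; Before: both the base and the derived pointer are listed in the
; statepoint and relocated.
%derived = getelementptr i8 addrspace(1)* %base, i64 5
; ... statepoint relocating %base (as %base.rel) and %derived ...

; After: relocate only %base and cheaply recompute the derived pointer,
; which on X86 typically folds into the addressing mode of its user.
%derived.rel = getelementptr i8 addrspace(1)* %base.rel, i64 5
```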

A frontend could implement a similar optimization for gcroot at IR generation time.  You could also implement such an optimization over the load/call/store representation, but the implementation would be much more complex (analogous to the null optimization above).

To be fair, gc.root may need such an optimization less.  Since call-safepoints are inserted early, CSE has not yet run.  As a result, there may be fewer “easily derived pointers” live across a call.

Format – Statepoints use a standard format.  gc.root supports custom formats.  Either could be extended to support the other without much difficulty.

The more material difference between the two is that gc.root generates a single stack map for the entire function while statepoints generate a unique stack map per call site.  Having a single stack map imposes a slight penalty on code compiled with gc.root since dead values must explicitly be removed from the alloca (by a write of null).  In the wrong situation (say a tight loop with two calls), this could be material.

Lowering – Currently, both gc.root and statepoint lower to stack slots.  gc.root does this at the IR level; statepoint does so in SelectionDAG.

The design of statepoints is intended to allow pushing the explicit relocations back through the backend.  The reason this is desirable is that pointers can be left in callee saved registers over call sites.  Without substantial re-engineering, such a thing is not possible for gc.root.  The importance of this from a performance perspective is debatable.  It is my belief that the key benefit would be in a) reducing frame sizes (by not requiring spill slots), and b) avoiding spills around calls.

An advantage of gc.root is that the backend can remain largely ignorant of the gc.root mechanism.  By the time the backend encounters them, a gc.root is just another alloca.  One potential problem with the current implementation is that the escape is lost when lowering; the gcroot call is lowered to an entry into a side table and the alloca no longer escapes.  This is a source of possible bugs, but is also a straightforward fix.

As to the lowering currently implemented, it’s debatable which is better.  Statepoints optimize constants and unify based on SDValue.  As a result, two IR level values of different types (with the same bit pattern) can end up sharing the same stackslot.  However, it suffers when trying to assign stack slots.  We currently use heuristics, but you can end up with ugly shuffling of values around on the stack across basic blocks.  (There’s a number of ways to improve that, but it’s not yet implemented.)  gc.root doesn’t suffer from this problem since stack slots are assigned by the frontend.

Since the stack spills and reloads are visible at the IR layer, gcroot gets the full ability of the optimizer to remove redundant reloads.  Statepoints only get to leverage the pieces in the backend.  In theory, this could result in materially worse spill/reload code for statepoints.  In practice, this appears not to matter much provided the same value is assigned to the same slot across both calls, but I don’t actually have much data here to say anything conclusively yet.

I haven’t tried to measure frame size for gc.root vs statepoints.  I suspect that statepoints may come out slightly ahead, but I doubt this is material.  There are also cases (see “easily derived pointers” above), where gc.root may come out ahead.

IR Level Optimization – Both gc.root and statepoints cripple optimization (by design!).  gcroot works better with inlining today, but statepoints could be easily enhanced to handle this case.  (The same work would benefit symbolic patchpoints.)

It is my belief that statepoints are easier to optimize (i.e. to teach passes like LICM about), but this is purely my guess with no real evidence.  Both suffer from the fact that calls must be marked readwrite.  Not having to reason about memory seems easier, but I’m open to other arguments here.

Community Support & Compatibility
From a practical perspective, statepoints have active users behind them.  We are interested in continuing to enhance and optimize them in the public tree.  The same support does not seem to exist for gcroot.

The implementation of statepoints is largely aligned with that of patchpoints.  The implementation of gcroot is completely separate and poorly understood by the majority of the community.

It wouldn’t be hard to write a translation pass from gcroot to statepoints or from statepoints to gcroot.  If folks are concerned about compatibility, this would be a reasonable option.  The largest challenge to transparently replacing one with the other is in generating the right output format.

Summary
To summarize, gcroot and statepoints are functionally equivalent (modulo possible bugs.)  In their current form, the two are largely comparable with each having some benefits.  Long term, we believe a statepoint representation will allow better code generation and IR level optimization of code with safepoints inserted.  We believe statepoints to be easier to optimize both at the IR level and backend.

Again, the late safepoint proposal is independent and could be done with either representation.  It’s currently implemented on statepoints, but it could be extended to gcroot without too much work.

Statepoint changes up for review

Last week, the first set of patches for our work on garbage collection support in LLVM hit the mailing list.  The review process will probably take a few weeks, but hopefully these should have landed by the 2014 LLVM Developers Meeting at the end of this month.  At that conference, my co-worker Sanjoy and I are going to be giving a talk about our progress on statepoints, and late safepoint placement.

Here’s the full text of the review request, along with a couple of updates:

Title: [Patch] Statepoint infrastructure for garbage collection

The attached patch implements an approach to supporting garbage collection in LLVM that has been mentioned on the mailing list a number of times by now.  There are a couple of issues that need to be addressed before submission, but I wanted to get this up to give maximal time for review.

The statepoint intrinsics are intended to enable precise root tracking through the compiler so as to support garbage collectors of all types.  Our testing to date has focused on fully relocating collectors (where pointers can change at any safepoint poll, or call site), but the infrastructure should support collectors of other styles.  The addition of the statepoint intrinsics to LLVM should have no impact on the compilation of any program which does not contain them.  There are no side tables created, no extra metadata, and no inhibited optimizations.

A statepoint works by transforming a call site (or safepoint poll site) into an explicit relocation operation.  It is the frontend’s responsibility (or eventually the safepoint insertion pass we’ve developed, but that’s not part of this patch) to ensure that any live pointer to a GC object is correctly added to the statepoint and explicitly relocated.  The relocated value is just a normal SSA value (as seen by the optimizer), so merges of relocated and unrelocated values are just normal phis.  The explicit relocation operation, the fact the statepoint is assumed to clobber all memory, and the optimizer’s standard semantics ensure that the relocations flow through IR optimizations correctly.
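
For example, a merge of a relocated and an unrelocated value needs no special handling at all (sketch, with hypothetical block and value names):

```llvm
merge:
  ; One predecessor reached this block without crossing a safepoint; the
  ; other crossed a statepoint and relocated %p.  An ordinary phi merges
  ; the two SSA values.
  %p = phi i8 addrspace(1)* [ %p.orig, %no_safepoint ],
                            [ %p.relocated, %after_statepoint ]
```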

During the lowering process, we currently spill aggressively to stack.  This is not entirely ideal (and we have plans to do better), but it’s functional, relatively straightforward, and matches closely the implementations of the patchpoint intrinsics.  We leverage the existing StackMap section format, which is already used by the patchpoint intrinsics, to report where pointer values live.  Unlike a patchpoint, these locations are known (by the backend) to be writable during the call.  This enables the garbage collector to transparently read and update pointer values if required.  We do optimize lowering in certain well known cases (constant pointers, a.k.a. null, being the key one.)

There are a few areas of this patch which could use improvement:

  • The patch needs to be rebased against TOT.  It’s currently based against a roughly 3 week old snapshot. (FIXED)
  • The intrinsics should probably be renamed to include an “experimental” prefix.
  • The usage of Direct and Indirect location types is currently inverted as compared to the definition used by patchpoint.  This is a simple fix. (FIXED)
  • The test coverage could be improved.  Most of the tests we’ve actually been using are built on top of the safepoint insertion mechanism (not included here) and our runtime.  We need to improve the IR level tests for optimizer semantics (i.e. not doing illegal transforms), and lowering.  There are some minimal tests in place for the lowering of simple statepoints.
  • The documentation is “in progress” (to put it kindly.)  (MUCH IMPROVED, MORE TODO)
  • Many functions are missing doxygen comments
  • There’s a hack in place to force the use of RSP+Offset addressing vs RBP-Offset addressing for references in the StackMap section.  This works and shouldn’t break anyone else, but should definitely be cleaned up.  The choice of addressing preference should be up to the runtime.

When reviewing, I would greatly appreciate feedback on which issues need to be fixed before submission and those which can be addressed afterwards.  It is my plan to actively maintain and enhance this infrastructure over the next few months (and years).  It’s already been developed out of tree entirely too long (our fault!), and I’d like to move to incremental work in tree as quickly as feasible.

Planned enhancements after submission:

  • The ordering of arguments in statepoints is essentially historical cruft at this point.  I’m open to suggestions on how to make this more approachable.  Reordering arguments would (preferably) be a post commit action.
  • Support for relocatable pointers in callee saved registers over call sites.  This will require the notion of an explicit relocation pseudo op and support for it throughout the backend (particularly the register allocator.)
  • Optimizations for non-relocating collectors.  For example, the clobber semantics of the spill slots aren’t needed if the collector isn’t relocating roots.
  • Further optimizations to reduce the cost of spilling around each statepoint (when required at all).
  • Support for invokable statepoints.
  • Once this has baked in tree for a while, I plan to delete the existing gc_root code.  It is unsound, and essentially unused.

In addition to the enhancements to the infrastructure in the currently proposed patch, we’re also working on a number of follow up changes:

  • Verification passes to confirm that safepoints were inserted in a semantically valid way (i.e. that no value is used after a safepoint at which it should have been relocated)
  • A transformation pass to convert naive IR to include both safepoint polling sites, and statepoints on every non-leaf call.  This transformation pass can be used at initial IR creation time to simplify the frontend authors’ work, but is also designed to run on *fully optimized* IR, provided the initial IR meets certain (fairly loose) restrictions.
  • A transformation pass to convert normal loads and stores into user provided load and store barriers.
  • Further optimizations to reduce the number of safepoints required, and improve the infrastructure as a whole.

We’ve been working on these topics for a while, but the follow on patches aren’t quite as mature as what’s being proposed now.  Once these pieces stabilize a bit, we plan to upstream them as well.  For those who are curious, our work on those topics is available here: https://github.com/AzulSystems/llvm-late-safepoint-placement

http://reviews.llvm.org/D5683

Continuous leaky regions

I was reading Tim Harris’s 2006 tech report on his Leaky Regions work when an idea occurred to me.  I haven’t given it enough thought to establish whether it’s a good idea or not, but I thought I’d write it down and share it regardless.  🙂

A lot of the complexity and performance tradeoff in that tech report comes from the fact that entering a potentially leaky region is fairly expensive and the number of such dynamic regions is bounded.  It seems like you could use a modified write barrier to record writes within the young generation from “older” to “younger”.  With this information and a normal old/new barrier, you’d have all the information you needed to record any subset of the young generation you wanted (as long as you wanted a stack like segment.)

(I’m going to ignore the cost of the write barrier.  Let’s just assume that’s addressable.  To be clear, I’m also assuming that you’re logging the source of the reference not the target in the write barrier.)

Given this, your collection heuristics could be much simpler.  Rather than deciding to record a region in advance, you could establish a region at the point you decided it was likely profitable to collect it.  (The different-TLAB scheme suggested in the tech report would work fine combined with a profile based estimation of a function’s escape rate.)  The downside is that you’d have to scan objects (recorded by the write barrier) in “older” regions of the young generation to see if they actually contained a reference to the region you’d dynamically designated.

My guess would be that this scan wouldn’t actually be that expensive.  It’s highly parallelizable and can be done concurrently with the normal mark of the dynamic region.  It also seems like this is a much easier cost function to weigh against a fairly predictable payoff.  Combined with the fact that this allows you to lazily create an infinite number of collection regions, the approach seems moderately powerful.

Of course, at the end of the day, this is only saving young generation collections.  Given how cheap those are in any modern collector, is the extra complexity of this scheme actually worth it?  Who knows.

IR Restrictions for Late Safepoint Placement

The late safepoint placement pass we released recently has a couple of restrictions on the IR it can handle.  I’ve described those restrictions a couple of different times now, so I figured it was time to put them up somewhere I could reference and that google might find.  A shorter version of this post will also appear in the source code shortly.

The SafepointPlacementPass will insert safepoint polls for method entry and loop backedges.  It will also transform calls to non-leaf functions to statepoints.  The former are how the application (mutator) code interacts with the garbage collector and may actually trigger object relocation.  The latter are necessary so that polls in called functions can inspect and modify frames further up the stack.

The current SafepointPlacementPass works for nearly arbitrary IR.  Fundamentally, we require that:

  • Pointer values may not be cast to integers and back.
  • Pointers to garbage collected objects must be tagged with address space #1.

In addition to these fundamental limitations, we currently do not support:

  • safepoints at invokes (as opposed to calls)
  • use of indirectbr
  • aggregate types which contain pointers to GC objects
  • pointers to GC objects stored in global variables, allocas, or at constant addresses
  • constant pointers to garbage collected objects (other than null)
  • garbage collected pointers which are undefined (“undef”)
  • use of gc_root

Patches welcome for the latter class of items.  I don’t know of any fundamental reasons they couldn’t be supported.

Fundamentally, a precise garbage collector must be able to accurately identify which values are pointers to garbage collected objects.  We choose to use the distinction between pointer types and non-pointer types in the IR to establish that a particular value is a pointer and use the address space mechanism to distinguish between pointers to garbage collected and non-garbage collected objects.  We don’t require that the types of pointers be precise – in LLVM this would not be a safe assumption! – but we do require that a pointer value actually be typed as a pointer.

We disallow inttoptr and addrspacecast instructions in an effort to ensure this distinction is upheld.  Otherwise, you could have code like the following:

Object* p = …;
int x = (int)p;
foo();  // becomes a safepoint, can move objects
Object* p2 = (Object*)x;  // stale if p was relocated by the collector

Note that while the SafepointPlacementPass will try to check for some violations of this assumption, it will not catch all cases.  At the end of the day, it is the responsibility of the frontend author to get this right.

Now on to the various implementation restrictions.

  • We plan to support safepoints on InvokeInsts.  In fact, the released code already has partial support for this.  This is not a high priority for us at the moment, but should be fairly straightforward to complete if anyone is interested.
  • IndirectBr creates problems for the LoopSimplify pass which we use as a helper for identifying backedges in loops.  Our source language doesn’t have any need for indirect branches, but if anyone can identify a better way to detect backedges which doesn’t involve this restriction, we’d gladly take the patch.
  • Currently, we do not support finding pointers to garbage collected objects contained in first class aggregate types in the IR.  The extensions required to support this are fairly straightforward, but we have no need for this functionality.  Well structured patches are welcome, but since this will be a fairly invasive change, please coordinate the merge early and closely.  (Alternatively, wait until this has been merged into upstream LLVM and use the standard incremental review and commit process.)
  • Note that we have no plans to support untagged unions containing pointers.  We could support tagged pointers, but this would require either extensions to the IR, or language specific hooks exposed in the SafepointPlacementPass.  If you’re interested in this topic, please contact me directly.
  • The support for pointers to GC objects in global variables, allocas, or arbitrary constant memory locations is weak at best.  There’s some code intended to support these cases, but tests are lacking and the code is likely to be buggy.  Patches are welcome.
  • We do not support constant pointers to garbage collected objects other than null.  For a relocating garbage collector, such constant pointers wouldn’t make sense.  If you’re interested in supporting non-relocating collectors or relocating collectors with pinned objects, some extensions may be necessary.
  • We have not integrated the late safepoint placement approach with the existing gcroot mechanism.  Given this mechanism is simply broken, we do not plan to do so.  Instead, we plan to simply remove that support once late safepoint placement lands.  If you’re interested in migrating from one approach to the other, please contact me directly.  I’ve got some ideas on how to make this easy using custom transform passes, but don’t plan on investing any time in this unless requested by interested parties.

Code for late safepoint placement available

This post contains the text of an email I sent to the LLVMdev mailing list a few moments ago.  I would encourage you to direct technical questions and comments to that thread, though I will also respond to technical questions in comments posted here.

As I’ve mentioned on the mailing list a couple of times over the last few months, we’ve been working on an approach for supporting precise fully relocating garbage collection in LLVM.  I am happy to announce that we now have a version of the code available for public view and discussion.

https://github.com/AzulSystems/llvm-late-safepoint-placement

Our goal is to eventually see this merged into the LLVM tree.  There’s a fair amount of cleanup that needs to happen before that point, but we are actively working towards that eventual goal.

Please note that there are a couple of known issues with the current version (see the README).  This is best considered a proof of concept implementation and is not yet ready for production use.  We will be addressing the remaining issues over the next few weeks and will be sharing updates as they occur.

In the meantime, I’d like to get the discussion started on how these changes will eventually land in tree.  Part of the reason for sharing the code in an early state is to be able to build a history of working in the open, and to be able to merge minor fixes into the main LLVM repository before trying to upstream the core changes.  We are aware this is a fairly major change set and are happy to work within the community process in that regard.

I’ve included a list of specific questions I know we’d like to get feedback on, but general comments or questions are also very welcome.

Open Topics:

  • How should we factor the core GC support for review?  Our current intent is to separate logically distinct pieces, and share each layer one at a time.  (e.g. first infrastructure enhancements, then intrinsics and codegen support, then verifiers, then safepoint insertion passes)  Is this the right approach?
  • How configurable does the GC support need to be for inclusion in LLVM?  Currently, we expect the frontend to mark GC pointers using address spaces.  Do we need to support alternate mechanisms?  If so, what interface should this take?
  • How should we approach removing the existing partial support for garbage collection (gcroot)?  Do we want to support both going forward?  Do we need to provide a forward migration path in bitcode?  Given that usage is generally through MCJIT, we would prefer to simply deprecate the existing gcroot support and target it for complete removal a couple of releases down the road.
  • What programmatic interface should we present at the IR level and where should it live?  We’re moving towards a CallSite like interface for statepoints, gc_relocates, and gc_results call sites.  Is this the right approach?  If so, should it live in the IR subtree, or Support?  (Note: The current code is only about 40% migrated to the new interface.)
  • To support invokable calls with safepoints, we need to make the statepoint intrinsic invokable.  This is new for intrinsics in LLVM.  Is there any reason that InvokeInst must be a subclass of CallInst? (rather than a view over either calls or invokes like CallSite)  Would changes to support invokable intrinsics be accepted upstream?  Alternate approaches are welcome.
  • Is the concept of an abstract VM state something LLVM should know about?  If so, how should it be represented?  We’re actively exploring this area, but don’t have strong opinions on the topic yet.
  • Our statepoint shares a lot in the way of implementation and semantics with patchpoint and stackmap.  Is it better to submit new intrinsics, or try to identify a single intrinsic which could represent both?  Our current feeling is to keep them separate semantically, but share implementation where possible.

Yours,
Philip (& team)

p.s. Sanjoy, one of my co-workers, will be helping to answer questions as they arise.

p.p.s. For those wondering why the current gcroot mechanism isn’t sufficient, I covered that in a previous blog post:
[1] http://www.philipreames.com/Blog/2014/02/21/why-not-use-gcroot/

Best Practice: Stabilize tests before adding them to a test suite

One common mistake to make when adding a new test to an existing test suite is to accidentally add a test which fails in a non-deterministic manner.  Tests which spuriously fail waste developers’ time and degrade their trust in the test suite as a whole.  To avoid this, before adding a new test to the test suite, the test should be stabilized or burnt in.  The simplest way to do this is simply to run the test a large number of times, preferably on several different machines.

In practice, I find that running a few thousand iterations (say 5-7 thousand) tends to expose most issues.  Frankly, if a test fails less than 1 time in several thousand, that’s probably better than most of the tests already in your test suite.

It should go without saying that this step can and should be automated.  Ideally, the stabilization step would be built directly into the test suite management tool, but in practice I’ve rarely seen this be the case.

This technique is applicable to almost any type of test, but tends to be most relevant when applied to integration tests at either the module or application level.  Hopefully, your small unit tests don’t have enough non-determinism to make stabilization necessary.

If anyone is curious, the trigger for this post is that I forgot to do this with a new test recently and it bit me.  🙂

Feature latency

The primary purpose of this post is to share one of the most insightful articles on engineering management and productivity that I’ve seen in a while.

Speeding Up Your Engineering Org, Part I: Beyond the Cost Center Mentality
by Edmund Jorgensen of Hut 8 Labs

Edmund makes a very important, but entirely non-obvious until someone says it, point about the value which comes from how quickly an engineering organization can deliver on any particular project. While he doesn’t state it specifically, it’s good to remember that delivery in this context doesn’t have to be a fully shipped product feature; instead, it can be the knowledge required to make a decision about which of a set of options to pursue.  (He does get at this when talking about iteration and the CEO situation, but never says it outright.)

To build on his ideas, here’s a few concrete things I’ve seen that contribute to feature latency:

  • Long build and test cycles, particularly when bugs can be found late in the process.
  • Poor testing and quality assurance infrastructure or culture – developers become afraid to make changes quickly, most often because management blames the engineer for “not being careful enough” when there’s a systemic problem
  • Complexity of the code base – designing a new feature is harder and more time consuming, getting it right (without unexpected interactions on other things) becomes harder
  • Bit rot – if making changes in the code base is a chore, development slows down.  If engineers can’t understand the code, development slows down.
  • Non-idiomatic code – when development practices within an organization diverge heavily from industry best practice, bringing new employees up to speed becomes more expensive.  This makes scaling harder (which affects mostly larger projects), but also saps resources in the form of required training.
  • Poor specification – if you build the wrong thing and have to redo it, you’ve doubled latency.  Note that iteration is the right answer here, not “perfect” specs.
  • Manual processes – This is basically the queueing example from Edmund’s writeup.  This is really, really common.
  • Failure to address general quality issues – if an engineering team is devoting all their time to stomping customer bugs, feature development stalls

I’m sure there are others, but these are the ones that came immediately to mind.  Comments pointing out others are more than welcome.

Semantics of 2’s complement integer division

As part of a recent discussion on llvmdev, I had reason to go digging through a couple of language specs examining how they handle edge cases in integer division.  This post is just a summary, mostly so I don’t forget.

As a reminder of the basic problem, division (a/b) on N bit 2’s complement integers (i.e. the ones used on any modern machine) has two edge cases.  They are:

  1. Division by Zero (a = don’t care, b = 0) — Division by zero is undefined on any integer domain.  Note that this is an issue for both unsigned and signed math.
  2. Overflow (a = minimum int, b = -1) — This one is slightly less obvious.  The problem arises from the fact that there is one more negative number than there are positive numbers in the N-bit 2’s complement representation.  (You need one bit pattern to represent 0 itself.)  As a result, there is no exact answer to the negation of the most negative integer.  This one only applies to signed arithmetic.
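The overflow case is easiest to see concretely at a small width.  Here’s a sketch in Python (whose integers are arbitrary precision, so an 8-bit machine is simulated by hand):

```python
BITS = 8
MIN_INT = -(1 << (BITS - 1))        # -128, the most negative 8-bit value
MAX_INT = (1 << (BITS - 1)) - 1     #  127, the most positive 8-bit value

# The exact quotient MIN_INT / -1 is 128, which is one past MAX_INT,
# so it cannot be represented in 8 bits:
exact = MIN_INT // -1
assert exact == 128 and exact > MAX_INT

# Wrapping the exact result back into 8 bits (what most hardware does
# if it doesn't trap) lands right back on MIN_INT:
wrapped = ((exact + (1 << (BITS - 1))) % (1 << BITS)) - (1 << (BITS - 1))
assert wrapped == MIN_INT
```

Note that this wrapped result – minimum int divided by -1 yielding minimum int – is exactly the answer some of the languages below standardize on.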

Now, a quick survey of languages.  If I find a language where Div by Zero yields different results for signed and unsigned types, I’ll explicitly note that.  I don’t know of one currently.  I’ve included links to specifications where possible.

C11

Overflow is undefined.
Division by Zero is undefined.

C++11

Same as C11

Java

Div by Zero – throws an ArithmeticException
Overflow – result is minimum integer (i.e. no exception is thrown)

Go Lang

Div by Zero – run-time panic
Overflow – result is minimum integer

Rust Lang

Unspecified.  “/” desugars to a call to “std::ops::Div“; neither the language reference nor the trait documentation specifies the actual semantics.

Note that Rust supports both fixed width signed and unsigned types, as well as machine specific integer signed and unsigned types.

Python

Div by Zero – throws ZeroDivisionError
Overflow – cannot occur; values are silently promoted to a bignum representation, so the operation is defined in terms of mathematical integers.
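Both behaviors are easy to confirm from a Python prompt; a minimal sketch:

```python
import sys

# Division by zero raises ZeroDivisionError:
try:
    1 // 0
    raised = False
except ZeroDivisionError:
    raised = True
assert raised

# No overflow case: integers silently promote to arbitrary precision,
# so even "most negative machine-word int // -1" yields the exact
# mathematical answer (sys.maxsize stands in for the machine word here).
min_int = -sys.maxsize - 1
assert min_int // -1 == sys.maxsize + 1   # no wraparound, no exception
```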

JavaScript

JavaScript does not support N-bit integer division.  Instead, floating point division is performed even for integer operands.  As a result, division by zero returns either NaN (for 0/0) or an infinity whose sign depends on the value of “a”.  Floating point has radically different overflow (and other error) cases.

The rules for ASM.js appear to be underspecified.

Julia Lang

Unspecified.  Note that other integer overflow behavior is defined, but division is not mentioned.

If I’ve gotten any of these wrong, or there are interesting languages I’ve missed, please let me know.  I’d particularly welcome solid references for Ruby, Lisp, Smalltalk, or other historical languages.

When is speech truly free?

This was originally written about two weeks ago – right after Mr Eich’s resignation was announced – but due to travel and lack of internet at home at the moment, posting has been a bit delayed.

Over the last few weeks, I have watched with something akin to horror as a man – who by all reports is utterly qualified for the position he held – has been vilified and driven from an organization which claims to support free speech on the internet due to a political donation he made.

The entire situation has been shameful.

For those who haven’t been following the tech news, I refer to the case of Brendan Eich who used to be the CEO of Mozilla.  He is the inventor of JavaScript, co-founded Mozilla, and has served as its CTO for several years.  Apparently, he made a donation to the campaign in support of Proposition 8 a few years back.

He was recently promoted to CEO, and the internet immediately erupted with outrage.  From what I can find, none of the outrage has been over his qualifications or any professional act.  Instead, it all focused on his political beliefs and speech.  The snowballing situation has driven him to resign from both his position and all future role with Mozilla.

We claim to believe in free speech.  Free speech is not only the speech we agree with.  Free speech is the ability to say what is unpopular, distasteful, and hateful.

Brendan Eich’s donation was free speech.  His speech had no impact on his qualification to be CEO of Mozilla.  The fact that public outrage has driven him from that position is shameful and hypocritical – particularly for an organization like Mozilla.

Without Freedom of Thought, there can be no such thing as Wisdom; and no such thing as public Liberty, without Freedom of Speech. — Ben Franklin

“I disapprove of what you say, but I will defend to the death your right to say it” — Evelyn Beatrice Hall

I would remind you that up until *very* recently, political speech in support of gay rights was itself the unpopular speech.  Are we really so ready to abandon the moral and legal principles we staunchly defended so recently?

Late Safepoint Placement: An Update

A couple of weeks ago, I promised further detail on the late safepoint placement approach.  Since that hasn’t developed – yet – I wanted to give a small update.

All along, we’ve had two designs in mind for representing safepoints.  One was “clearly the right one” for long term usage, but was a bit more complicated to implement.  The other was “good enough” for the moment – we thought – and allowed us to prototype the design quickly.

Not too long after my last post, “good enough” stopped being good enough.  🙂

Over the last few weeks, we’ve been rearchitecting our prototype and exploring all the unexpected corner cases.  Nothing too major to date, but I wanted to hold off on describing things in detail until we had some actual hands-on experience.  Once things settle out, I’ll take the time to write it up and share it.

So, in other words, please be patient for a bit longer… 🙂