I'm not sure if this has been addressed elsewhere, but I'm curious: with the talk of GIMP becoming GEGL-based in future versions, would that speed up operations such as running a Gaussian blur on a large image or rescaling multiple layers at once? Or are those kinds of things just inherently time-consuming?
Yes, something like a Gaussian blur is very compute-intensive. Even compositing two overlapping layers with partial transparency is fairly CPU-intensive (at least one multiplication per pixel per color channel). And try using the perspective tool with preview on a 12 Mpx photograph...
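To make that per-pixel cost concrete, here is a rough CPU-side sketch of the standard "over" compositing step for 8-bit RGBA buffers. The function name and layout are my own illustration, not GIMP's internal code; the point is just how the arithmetic scales with image size:

    #include <stdint.h>
    #include <stddef.h>

    /* Blend a semi-transparent top layer over an opaque bottom layer
     * ("over" operator, non-premultiplied, simplified for clarity).
     * Each pixel needs two multiplications per color channel here, so
     * a 12 Mpx image costs tens of millions of multiplies per redraw,
     * before any blur or scaling math is even considered. */
    void
    composite_over (const uint8_t *top, const uint8_t *bottom,
                    uint8_t *out, size_t n_pixels)
    {
      for (size_t i = 0; i < n_pixels; i++)
        {
          const uint8_t *t = top    + i * 4;   /* RGBA */
          const uint8_t *b = bottom + i * 4;
          uint8_t       *o = out    + i * 4;
          unsigned       a = t[3];

          for (int c = 0; c < 3; c++)
            o[c] = (uint8_t) ((t[c] * a + b[c] * (255 - a)) / 255);

          o[3] = 255;  /* assume the result stays opaque */
        }
    }

Every pixel is independent of its neighbours, which is exactly why this kind of work is a good candidate for a GPU.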
One point in favor of GEGL is that it makes it a lot easier to offload compute-intensive tasks from the main processor (CPU) to the graphics card (GPU). The latter is mostly a large set (from 64 to over 2000 in current high-end cards) of simple arithmetic processors that can work in parallel.
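The way GEGL enables that is by describing edits as a graph of operations rather than as code that touches pixels directly. A minimal sketch of what that looks like with GEGL's public graph API (the file paths and blur radius are just placeholders; when GEGL is built with OpenCL support, many such operations can be executed on the GPU instead of the CPU):

    #include <gegl.h>

    int
    main (int argc, char **argv)
    {
      gegl_init (&argc, &argv);

      GeglNode *graph = gegl_node_new ();
      GeglNode *load  = gegl_node_new_child (graph,
                                             "operation", "gegl:load",
                                             "path", "input.jpg",
                                             NULL);
      GeglNode *blur  = gegl_node_new_child (graph,
                                             "operation", "gegl:gaussian-blur",
                                             "std-dev-x", 5.0,
                                             "std-dev-y", 5.0,
                                             NULL);
      GeglNode *save  = gegl_node_new_child (graph,
                                             "operation", "gegl:save",
                                             "path", "output.png",
                                             NULL);

      /* load -> blur -> save; processing only happens when the sink
       * is asked for its result, so backends (CPU tiles, OpenCL) can
       * decide where the actual pixel math runs. */
      gegl_node_link_many (load, blur, save, NULL);
      gegl_node_process (save);

      g_object_unref (graph);
      gegl_exit ();
      return 0;
    }

So GEGL itself doesn't make a Gaussian blur cheaper, but it gives GIMP a structure where the expensive parts can be moved onto hardware that handles them much faster.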