[clug] World's first open movie! "Elephants Dream"

Arafangion thestar at fussycoder.id.au
Tue May 30 15:04:58 GMT 2006


Ben wrote:

> Thanks, I can't believe I missed this. Absolutely Stunning.
>
> I was flicking through the production notes and noticed this:
>
> "While running Blender in debug mode and finally getting a crash, I
> discovered that memory allocation addresses were suspiciously growing
> into the 0xFFFFFFFF range, or in other words; the entire memory space
> was in use! Our systems have 2.5 GB memory, and this project was only
> allocating like 1.5 of it.
>
> "To my big dismay it appeared that OSX only assigns processes a memory
> space of 2 GB!
> [...]
> "Now I can already see the Linuxers smirk! Yes indeed, doing renders
> on our Linux stations just went smooth and without problems. Linux
> starts with memory allocations somewhere in the lower half, and will
> easily address up to 3 GB or more.
>
> They found a workaround for the OSX issue, but it's much nicer when
> things just work out of the box *smirk*. From the comments:
>
> "Ton, this is not just a Mac OS X problem, I got it also in Windows,
> when I got my 2GB of RAM and tried rendering a scene that loaded 1.5GB
> of RAM and it was fragmented. Blender crashed with a segfault, no
> other errors. 

As you are aware, on a 32-bit system memory is addressed through 32-bit
pointers, so a process has at most 2^32 (4 GiB) of distinct addresses to
work with.

I think you will find that it is also a problem in Linux - it's just
that by default the kernel uses a 3:1 user/kernel address-space split
(3 GB for the process, 1 GB reserved for the kernel).
Mac OS X apparently uses a 2:2 split, as does Windows.

Some Red Hat distributions also ship kernels with a 2:2 split.

In other words, try using more than 3GB of memory in a single process on
Linux and see what you get :)

Linux also allows other workarounds: if you need a different split, or
the ability to address up to 64 GB of physical memory (via PAE), you can
get that too, but you may/will have to compile a special kernel for it,
and/or use pointers wider than 32 bits (as is already the case on most
64-bit systems, afaik).

<snip>


