0 votes

I have a Node.js application that does some image processing on large files using sharp, which in turn uses nan to interface with Node. When I load a very large image, I get an error from nan that says:

    node: ../node_modules/nan/nan.h:679: Nan::MaybeLocal<v8::Object> Nan::NewBuffer(char*, size_t, node::Buffer::FreeCallback, void*): Assertion `length <= imp::kMaxLength && "too large buffer"' failed. Aborted (core dumped)

You can see line 679 of nan.h here

But in summary it says this:

    // arbitrary buffer lengths requires
    // NODE_MODULE_VERSION >= IOJS_3_0_MODULE_VERSION
    assert(length <= imp::kMaxLength && "too large buffer");

I have

    $ node -v
    v4.4.6

At the top of the file you can see that this should be a later version than IOJS_3_0_MODULE_VERSION, which is supposed to allow arbitrary-length buffers. However, the assert is not wrapped in #ifdefs. Does anyone know how to use arbitrary-length buffers with nan?

Does it have to be a buffer? libvips (the image processing library that sharp uses) can process many formats directly from disk files without loading them into memory. – jcupitt
For this application, I'd rather it be a buffer, yes. – matth
@lovell-fuller, any ideas? – jcupitt

1 Answer

1 vote

The NAN maintainers want to provide uniform behavior across all versions of Node, which seems to mean sticking to the limitations of the earlier versions (discussion). I assume that's why there are no #ifdefs around the assert that would enable big buffers on newer versions.

If you allocate the memory for the Buffer without using NAN, then you can go up to the current kMaxLength of 0x7fffffff (i.e. an extra gigabyte over what NAN restricts it to).

    size_t len = 0x7fffffff;
    char* bigbuf = (char*)malloc(len);
    // This overload of node::Buffer::New takes ownership of bigbuf and
    // frees it when the Buffer is garbage collected.
    node::Buffer::New(v8::Isolate::GetCurrent(), bigbuf, len);
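If that allocation lives inside a NAN-based addon, a minimal, untested sketch of returning the oversized Buffer from a method might look like the following. GetBigBuffer is a hypothetical name, and real code should check both the malloc result and the MaybeLocal before using them:

    #include <cstdlib>
    #include <nan.h>
    #include <node_buffer.h>

    // Hypothetical addon method returning a Buffer larger than NAN allows.
    NAN_METHOD(GetBigBuffer) {
      size_t len = 0x7fffffff;
      char* bigbuf = static_cast<char*>(malloc(len));
      // Call node's API directly instead of Nan::NewBuffer to avoid the
      // kMaxLength assert; this overload takes ownership of bigbuf.
      v8::MaybeLocal<v8::Object> buf =
          node::Buffer::New(v8::Isolate::GetCurrent(), bigbuf, len);
      info.GetReturnValue().Set(buf.ToLocalChecked());
    }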

I'm not sure where in your pipeline you're hitting this -- is it when you're reading from disk? Some of these techniques may be useful:

  • Processing your data in a stream (a rough sketch follows at the end of this list).
  • Doing the I/O from C++.
  • Reading the data from the file chunk-wise. kMaxLength is the maximum index, not the maximum amount of memory that can be used. So, if you can read your big buffer into a wider TypedArray from either C++ or within the stream data handler (using typedarray.set), then you could return e.g. a 0x7fffffff-length Uint32Array that consumes 8,589,934,588 bytes.

    // Not tested
    var fs = require("fs");
    var path = "/path/to/file";
    var size = fs.statSync(path).size;
    // Each Uint32Array element holds 4 bytes, so the index stays well under kMaxLength.
    var dest = new Uint32Array(size / Uint32Array.BYTES_PER_ELEMENT);
    var destOffset = 0;
    var rs = fs.createReadStream(path);
    rs.on("data", function (chunk) {
        // The third argument is an element count, not a byte length
        // (assumes each chunk's byteOffset and length are multiples of 4).
        var hunk = new Uint32Array(chunk.buffer, chunk.byteOffset,
                                   chunk.byteLength / Uint32Array.BYTES_PER_ELEMENT);
        dest.set(hunk, destOffset);
        destOffset += hunk.length;
    });
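For the streaming option above, sharp can itself act as a transform stream (in reasonably recent versions), so you may be able to avoid ever holding the whole image in a single Buffer. A rough, untested sketch; the file paths and resize dimensions are placeholders:

    var fs = require("fs");
    var sharp = require("sharp");

    // sharp() with no arguments returns a duplex stream: pipe raw image
    // data in, read the processed image out.
    var transformer = sharp().resize(2048, 2048).png();

    fs.createReadStream("/path/to/huge-image.tiff")
      .pipe(transformer)
      .pipe(fs.createWriteStream("/path/to/resized.png"));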