`csv.reader` won't read the whole file into memory. It lazily iterates over the file, line by line, as you iterate over the reader object. So you can use the reader as you normally would, but `break` out of your iteration after you've read however many lines you want. You can see this in the C code used to implement the reader object.
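For example, here is a minimal sketch of that pattern (the CSV data is illustrative; an in-memory `StringIO` stands in for a large file on disk):

```python
import csv
import io
import itertools

# In-memory stand-in for a large CSV file on disk (illustrative data).
data = io.StringIO("a,1\nb,2\nc,3\nd,4\ne,5\n")

reader = csv.reader(data)
# Take only the first three rows; the rest of the "file" is never parsed.
first_three = list(itertools.islice(reader, 3))

# The equivalent explicit loop, breaking once enough rows are collected:
data.seek(0)
rows = []
for row in csv.reader(data):
    rows.append(row)
    if len(rows) == 3:
        break
```

`itertools.islice` and the explicit `break` loop are interchangeable here; neither one causes more than three lines to be read from the underlying file.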
Initializer for the reader object:

```c
static PyObject *
csv_reader(PyObject *module, PyObject *args, PyObject *keyword_args)
{
    PyObject * iterator, * dialect = NULL;
    ReaderObj * self = PyObject_GC_New(ReaderObj, &Reader_Type);

    if (!self)
        return NULL;

    self->dialect = NULL;
    self->fields = NULL;
    self->input_iter = NULL;
    self->field = NULL;

    /* ... argument unpacking elided; `iterator` is the file object
       (or any iterable of lines) passed to csv.reader() ... */

    self->input_iter = PyObject_GetIter(iterator);
    if (self->input_iter == NULL) {
        PyErr_SetString(PyExc_TypeError,
                        "argument 1 must be an iterator");
        Py_DECREF(self);
        return NULL;
    }
    /* ... */
```
And the reader's `__next__` implementation:

```c
static PyObject *
Reader_iternext(ReaderObj *self)
{
    PyObject *fields = NULL;
    Py_UCS4 c;
    Py_ssize_t pos, linelen;
    unsigned int kind;
    void *data;
    PyObject *lineobj;

    if (parse_reset(self) < 0)
        return NULL;
    do {
        lineobj = PyIter_Next(self->input_iter);  /* pull one line from the source */
        if (lineobj == NULL) {
            /* End of input OR exception */
            if (!PyErr_Occurred() && (self->field_len != 0 ||
                                      self->state == IN_QUOTED_FIELD)) {
                if (self->dialect->strict)
                    PyErr_SetString(_csvstate_global->error_obj,
                                    "unexpected end of data");
                else if (parse_save_field(self) >= 0)
                    break;
            }
            return NULL;
        }
        /* ... parse the line into fields ... */
```
As you can see, `next(reader_object)` calls `next(file_object)` internally (the `PyIter_Next(self->input_iter)` call above). So you're iterating over both line by line, without ever reading the whole file into memory.
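You can observe this pull-per-row behavior from Python, since `csv.reader` accepts any iterator of lines, not just file objects. The following sketch uses a generator (a hypothetical stand-in for a file) that records each line handed to the reader:

```python
import csv

pulled = []  # records every line the reader pulls from its source

def lines():
    # Generator standing in for a file object open in text mode.
    for line in ["x,1\n", "y,2\n", "z,3\n"]:
        pulled.append(line)
        yield line

reader = csv.reader(lines())
row = next(reader)  # triggers exactly one next() on the underlying iterator
```

After the single `next(reader)` call, `pulled` contains only the first line; the reader never reaches ahead of what you ask it for.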
Peter DeGlopper adds in a comment: consider the `buffering` argument on the file object you use to create the reader. All the Python objects support lazy evaluation without further effort; you just want to make sure the file doesn't try to pull the whole content into memory.
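For completeness, a small sketch of that point (the temporary file and its contents are illustrative): a buffered text file reads fixed-size chunks on demand, and passing an explicit `buffering` size only changes the chunk size, not the laziness.

```python
import csv
import os
import tempfile

# Write a tiny sample CSV to a temporary file (illustrative data only).
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    f.write("a,1\nb,2\n")

# Buffered reading pulls chunks as needed, never the whole file at once;
# 1 << 16 sets a 64 KiB buffer instead of the default.
with open(path, newline="", buffering=1 << 16) as f:
    rows = list(csv.reader(f))

os.remove(path)
```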