One solution is to read the entire contents of the file as a string of characters with FSCANF, split that string into individual cells at the newline characters using MAT2CELL, remove extra whitespace from the ends with STRTRIM, and then process the string data in each cell as needed. For example, using this sample text file 'junk.txt':
hi
hello
1 2 3
FF 00 FF
12 A6 22 20 20 20
FF FF FF
The following code will put each line in a cell of a cell array cellData:
>> fid = fopen('junk.txt','r');
>> strData = fscanf(fid,'%c');
>> fclose(fid);
>> nCharPerLine = diff([0 find(strData == char(10)) numel(strData)]);  %# Characters per line, including the newline
>> cellData = strtrim(mat2cell(strData,1,nCharPerLine))
cellData =
'hi' 'hello' '1 2 3' 'FF 00 FF' '12 A6 22 20 20 20' 'FF FF FF'
Now if you want to convert all of the hexadecimal data (lines 3 through 6 in my sample data file) from strings to vectors of numbers, you can use CELLFUN and SSCANF like so:
>> cellData(3:end) = cellfun(@(s) {sscanf(s,'%x',[1 inf])},cellData(3:end));
>> cellData{3:end} %# Display contents
ans =
1 2 3
ans =
255 0 255
ans =
18 166 34 32 32 32
ans =
255 255 255
NOTE: Since you are dealing with such large arrays, you will have to be mindful of the amount of memory used by your variables. The above solution is vectorized, but it can use a lot of memory, since strData and cellData exist at the same time. You may have to overwrite or clear large variables like strData once you have created cellData. Alternatively, you could loop over the elements of nCharPerLine and individually process each segment of the larger string strData into the vectors you need, which you can preallocate now that you know how many lines of data you have (i.e. nDataLines = numel(nCharPerLine)-nHeaderLines;).
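A minimal sketch of that loop-based alternative, assuming (as in the sample file) 2 header lines and hexadecimal values on every remaining line; the variable names nHeaderLines, nDataLines, and vecData are just illustrative choices:

```matlab
fid = fopen('junk.txt','r');
strData = fscanf(fid,'%c');  %# Read the whole file as one string
fclose(fid);
nCharPerLine = diff([0 find(strData == char(10)) numel(strData)]);
nHeaderLines = 2;                             %# 'hi' and 'hello' in the sample file
nDataLines = numel(nCharPerLine)-nHeaderLines;
vecData = cell(nDataLines,1);                 %# Preallocate the output
iStart = sum(nCharPerLine(1:nHeaderLines))+1; %# Skip past the header lines
for iLine = 1:nDataLines
  iEnd = iStart+nCharPerLine(nHeaderLines+iLine)-1;
  vecData{iLine} = sscanf(strData(iStart:iEnd),'%x',[1 inf]);  %# One line at a time
  iStart = iEnd+1;
end
```

This never builds the full cell array of strings: each line-sized slice of strData is converted in place, so the only large variables are strData and the preallocated vecData.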