[Octave-bug-tracker] [bug #42579] textscan error out of memory or dimension too large
From: Markus Bergholz
Subject: [Octave-bug-tracker] [bug #42579] textscan error out of memory or dimension too large
Date: Fri, 20 Jun 2014 18:07:03 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0
Follow-up Comment #3, bug #42579 (project octave):
dlmread is implemented in C++, while textscan is an .m-script function, so it is
quite possible that you hit the out-of-memory limit much earlier with textscan
than with dlmread.
First, try splitting the file into smaller ones:
split -l 1000 file.name
This splits the file into several files of 1000 lines each. Try to read one of
those with your method.
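As a sketch of that step: split writes chunks named xaa, xab, ... by default, so you could read the first chunk like this (the "%f" format assumes 23 numeric comma-separated columns, as in the reporter's file -- adjust both the filename and the format to your data):

```octave
% Hedged sketch: read one 1000-line chunk produced by "split -l 1000".
fid = fopen ("xaa", "r");              % "xaa" is split's default first chunk
fmt = repmat ("%f", 1, 23);            % assumed: 23 numeric columns
C = textscan (fid, fmt, "Delimiter", ",");
fclose (fid);
% C is a 1x23 cell array; C{1} holds the first column of this chunk.
```

If every chunk reads cleanly this way, the problem is memory, not parsing.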
If that works, then IMHO it is not a bug in textscan.
If you can read all the small files into your workspace in sequence, then
perhaps only a rewrite of textscan in C++ can fix this.
Second, you could also try csv2cell from the io package. It is a C++-written OCT
function, though it has some limitations too. Or use fread and regexp... or any
other function that is not a script function.
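A minimal sketch of the fread/regexp route: slurp the whole file as one string, then pull the numeric fields out with a regular expression (the pattern below is an assumption -- it matches plain signed decimals and exponents, and would need adjusting for quoted or non-numeric fields):

```octave
% Hedged sketch: parse a numeric CSV without any .m-file scanning loop.
fid = fopen ("file.name", "r");
txt = fread (fid, Inf, "char=>char").';  % whole file as one char row vector
fclose (fid);
tok  = regexp (txt, "[-+0-9.eE]+", "match");  % assumed all-numeric fields
vals = str2double (tok);                      % row vector of all numbers
% reshape (vals, 23, []).' would give one row per CSV line for 23 columns.
```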
Third, for example, I can't load a 1471219-row x 23-column CSV file into memory
either (64-bit Linux, 8 GB RAM; OOM). In that case you have to load the big data
into a database and access only the data you actually need at the moment.
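One way to sketch that database route is with the sqlite3 command-line tool (the table name "t" and the filenames are placeholders; note that .import treats the first CSV line as column names when the table does not exist yet):

```shell
# Hedged sketch: import a big CSV into SQLite, then query only what you need.
printf '.mode csv\n.import file.csv t\n' | sqlite3 big.db
sqlite3 big.db 'SELECT COUNT(*) FROM t;'
```

From Octave you would then fetch only the rows you need for the moment instead of the whole file.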
_______________________________________________________
Reply to this item at:
<http://savannah.gnu.org/bugs/?42579>
_______________________________________________
Message sent via/by Savannah
http://savannah.gnu.org/