Looks like it gets truncated due to the 4096-character maximum lengths elsewhere in the ASCII output functions. Perhaps this is a good opportunity to change those fixed buffers to allocated ones.
It chokes the shell with a single very long line starting with
tpart.bin: N = 11 <-0.290195733309/1.01857888699> <-0.216871097684/1.05669128895>
The debugger stops at gmt_io#L1156 (VS complains about vfprintf).
Yep, we have buffer[GMT_BUFSIZ] for per-record formatting etc., and of course things get truncated. Two options:
- Upgrade gmt_io.c and probably gmt_api.c to use allocations that can extend as longer records appear (see the sketch below).
- Use GMT_INITIAL_MEM_ROW_ALLOC instead for those buffers, which is 8 times larger than now. With the current system, random data, and the default format I can get ~192 columns of ASCII output; anything larger exceeds 4096. This is of course a function of the data precision and the chosen FLOAT format.
You have been adamant in the past that we not allocate more space than we need, but occasionally someone needs more than 200 columns, and in 2023 we all have more memory!
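Just to illustrate what option 1 could look like (a minimal sketch; the helper name and signature are hypothetical, not the actual gmt_io.c/gmt_api.c code): measure the formatted record with vsnprintf, extend the allocation only when a longer record shows up, then format into the grown buffer.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper (not GMT's actual API): format a record into a
 * caller-owned buffer that grows as needed, instead of writing into a
 * fixed char buffer[GMT_BUFSIZ].  Returns the number of characters
 * written (excluding the trailing NUL), or -1 on error. */
static int grow_and_sprintf (char **buf, size_t *n_alloc, const char *fmt, ...) {
	va_list ap;
	int len;

	va_start (ap, fmt);
	len = vsnprintf (NULL, 0, fmt, ap);	/* First pass: measure the formatted record */
	va_end (ap);
	if (len < 0) return -1;

	if ((size_t)len + 1 > *n_alloc) {	/* Extend the allocation only when a longer record appears */
		char *tmp = realloc (*buf, (size_t)len + 1);
		if (tmp == NULL) return -1;
		*buf = tmp;
		*n_alloc = (size_t)len + 1;
	}

	va_start (ap, fmt);
	len = vsnprintf (*buf, *n_alloc, fmt, ap);	/* Second pass: write the record */
	va_end (ap);
	return len;
}
```

The caller would keep the buffer and its current allocation around across records, so the buffer only ever grows to the longest record actually seen.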
First, the mathcol check fails because the text being printed is a single line more than 27k characters long.
To address this, the best would of course be option 1, but I understand that it might be just too much work. The problem with increasing GMT_BUFSIZ is that we use it in many places not related to ASCII table imports. My reticence to increase it much was because I remember reading (in discussions long ago on the GDAL list) that stack memory is limited and large fixed-size local arrays may consume a lot of it. Honestly, I don't know if this is true or if the situation has improved a lot over time.
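For what it's worth, here is the distinction I believe that old GDAL discussion was about (a generic sketch, not GMT code): a char buffer[GMT_BUFSIZ] declared inside a function lives on the stack and is paid for in every call frame that declares one, while a malloc'd buffer lives on the heap and is only as big as actually requested.

```c
#include <stdlib.h>

#define GMT_BUFSIZ 4096	/* Current value, used here purely for illustration */

/* A fixed local array costs GMT_BUFSIZ bytes of stack in every function
 * (and every recursion level) that declares one, so raising GMT_BUFSIZ
 * a lot multiplies that cost across call sites. */
void uses_stack (void) {
	char record[GMT_BUFSIZ];
	(void)record;
}

/* A heap buffer costs one allocation sized to the record that actually
 * appears, regardless of how large the worst case might be. */
void uses_heap (size_t needed) {
	char *record = malloc (needed);
	if (record == NULL) return;
	/* ... format the record ... */
	free (record);
}
```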
Paul & Joaquim,
While I spent my weekend pulling weeds, mowing lawns, and attending a relative’s high school graduation party, you guys have been exploring the options for my quandary.
In some of the ICESat-2 upper-level products, there are parameters that wind up having huge record lengths when dumped by h5dump. This is what is behind the situation I've been facing.
I usually use GMT's convert command, but for two-dimensional arrays I haven't quite struck on the proper usage of the command to successfully stage the data. It likely wouldn't make a lot of difference when it comes to working with these long record lengths.
Thanks for your weekend efforts! I’m monitoring the thread.
John