Tutorial for decimals in GMT?

In the docs I often see things like %6.4f, %.16lg, %.12g, etc. I know they control the number of decimals, but I don’t understand what the syntax means. I usually use trial and error until I get the decimals that I want. Where can I read about that syntax? Is it C’s printf? Could you share a good link/tutorial? Thanks in advance.

Yes, this is like C’s printf (or bash’s printf, as in the examples below). You could check out https://linux.die.net/man/3/printf. I’ve also struggled with this. I made a cheat sheet for myself; maybe it is of use.

For your specific case, controlling decimals, it’s like this: %[width].[number of decimals][type]

$ printf "%g \n" "3.14159265358979323"       # 3.14159
$ printf "%10.3g \n" "3.14159265358979323"   #       3.14
$ printf "%10.8g \n" "3.14159265358979323"   #  3.1415927
$ printf "%.16g \n" "3.14159265358979323"    # 3.141592653589793

I’m confused though; why does printf "%10.3g \n" "3.14159265358979323" print two decimals and not three? Maybe some divine C programmer can answer. EDIT: changing the type from g to f gives the expected result.
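To make the difference concrete (a small sketch using bash’s printf): the precision in %g counts significant digits, while in %f it counts decimals.

```shell
# %g precision = significant digits; %f precision = decimals
printf "%.3g\n" "3.14159"   # 3.14  (3 significant digits, so only 2 decimals)
printf "%.3f\n" "3.14159"   # 3.142 (3 decimals)
```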

| printf | %[-][n][.][n][h,l]cc |
|--------|----------------------|
| %      | conversion specification |
| -      | specifies left adjustment |
| n      | specifies the minimum field width |
| .      | separates the field width from the precision |
| n      | max number of chars (string) OR number of digits after decimal point (float) OR min. number of digits (int) |
| h      | print int as short |
| l      | (letter ell) print int as long |
| cc     | conversion character |

| Character | Argument type; printed as |
|-----------|---------------------------|
| d,i       | int; decimal number |
| o         | int; unsigned octal number |
| x,X       | int; unsigned hexadecimal number |
| u         | int; unsigned decimal number |
| c         | int; single character |
| s         | char *; text string |
| f         | double; [-]m.dddddd, d = precision (default 6) |
| e,E       | double; [-]m.dddddde±xx or [-]m.ddddddE±xx |
| g,G       | double; use %e or %E if the exponent is less than -4 or ≥ the precision; otherwise use %f; trailing zeros are removed |
| p         | void *; pointer |
| %         | no argument is converted; print a % |
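For illustration, a few lines exercising the flags and conversion characters from the cheat sheet (bash’s printf; the | characters just mark the field edges):

```shell
# left adjustment (-) and minimum field width
printf "|%-8d|\n" 42          # |42      |
printf "|%8.2f|\n" 3.14159    # |    3.14|
# other conversion characters: hex, octal, scientific notation
printf "%x %o %e\n" 255 8 12345.678   # ff 10 1.234568e+04
```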

See the Matlab manual for fprintf

The %g specifier is tricky (but often the most useful one) and many manuals about it are wrong. See this SO question/answer. The best thing about %g is that it removes trailing zeros.
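For illustration, the full rule is: %g uses %e style when the exponent is less than -4 or at least the precision (default 6), and %f style otherwise, stripping trailing zeros either way (bash printf):

```shell
printf "%g\n" 0.0001    # 0.0001  (exponent is -4, not < -4: still %f style)
printf "%g\n" 0.00001   # 1e-05   (exponent < -4: switches to %e style)
printf "%g\n" 1200000   # 1.2e+06 (exponent >= precision 6: %e style)
printf "%g\n" 2.5000    # 2.5     (trailing zeros removed)
```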

julia> using Printf

julia> @printf("%.15g", 0.3333)
0.3333
julia> @printf("%.15f", 0.3333)
0.333300000000000

Maybe it would be a nice addition to the man-pages to include a small paragraph on how to specify these printf templates. They are mentioned twice in https://docs.generic-mapping-tools.org/6.1/gmt.conf.html#format-parameters:


Format (C language printf syntax) to be used when plotting double precision floating point numbers along plot frames and contours. For geographic coordinates, see [FORMAT_GEO_MAP](https://docs.generic-mapping-tools.org/6.1/gmt.conf.html#term-FORMAT_GEO_MAP). [%.12lg].


Format (C language printf syntax) to be used when printing double precision floating point numbers to output files. For geographic coordinates, see [FORMAT_GEO_OUT](https://docs.generic-mapping-tools.org/6.1/gmt.conf.html#term-FORMAT_GEO_OUT). [%.12lg]. To give some columns a separate format, supply one or more comma-separated *cols*:*format* specifications, where *cols* can be specific columns (e.g., 5 for 6th since 0 is the first) or a range of columns (e.g., 3-7). The last specification without column information will override the format for all other columns. Alternatively, you can list N space-separated formats and these apply to the first N columns.
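As a sketch of that per-column syntax (the column split and formats here are made up for illustration; assumes a working GMT install), this could be set with gmt set:

```shell
# hypothetical: print columns 0-1 with 3 decimals, all remaining columns as %.12g
gmt set FORMAT_FLOAT_OUT 0-1:%.3f,%.12g
```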

Yes, that would be a nice addition. Would you mind making a pull request? Add it to FORMAT_FLOAT_OUT and just refer to FORMAT_FLOAT_OUT from FORMAT_FLOAT_MAP.


I thought the whole point of using printf-type output was that you always know what you get. With %g, this does not seem to be the case; what you get (length and precision) depends on the input number.

I read through the SO post you shared. I feel a bit wiser, but still not quite ‘arrived’. Based on a purely empirical example:

$ printf "%.5g\n" "2.33335234"   # 2.3334
$ printf "%.5f\n" "2.33335234"   # 2.33335

Explanation based on my primitive, outdated neural network:

  • %.ng - round the number to the nth significant digit, which here gives (n-1) decimals.
  • %.nf - print n decimals - what I specified
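Note the ‘(n-1) decimals’ reading only holds when there is a single digit before the decimal point; in general %.ng means n significant digits in total (bash printf):

```shell
# %g precision counts all significant digits, not decimals
printf "%.5g\n" "2.33335234"    # 2.3334 (5 significant digits, 4 decimals)
printf "%.5g\n" "233.335234"    # 233.34 (5 significant digits, only 2 decimals)
```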

What is the rationale for choosing %g over %f as default? Removing trailing zeros, as Joaquim mentioned? And why include l (as in %.12lg)? Based on my interpretation this causes int to be printed as long, but we’re not dealing with ints here, no? From https://en.wikipedia.org/wiki/Printf_format_string:

For floating point types, this [l] is ignored. float arguments are always promoted to double when used in a varargs call.
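A quick check that this holds in practice (bash’s printf also accepts the l modifier, and it makes no difference for a floating-point conversion):

```shell
# the l length modifier is accepted but ignored for %g
printf "%.12g\n"  "3.14159265358979"   # 3.14159265359
printf "%.12lg\n" "3.14159265358979"   # same output: l is ignored
```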

Again, sorry, no

>> fprintf('%.5g\n', 10.3333333)
10.333
>> fprintf('%.3g\n', 1/30)
0.0333
>> fprintf('%.4g\n', 12.001)
12
>> fprintf('%.5g\n', 12.001)
12.001

Note that the Matlab page says %g prints significant digits, not the number of decimal digits. Except that it doesn’t print trailing zeros, even when they are significant (the two 0s in 12.00).
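The 12.001 case, reproduced with bash’s printf, shows the same stripping of significant trailing zeros (and that %f keeps them):

```shell
printf "%.4g\n" "12.001"   # 12      (rounds to 12.00, then strips the zeros)
printf "%.5g\n" "12.001"   # 12.001
printf "%.4f\n" "12.001"   # 12.0010 (%f keeps trailing zeros)
```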

I use %g especially when I want to convert numbers into strings (for example when building the -R option). Otherwise, use whatever feels better for your case.

Thanks Joaquim, for enlightening me.

The Decimal, Double, and Float variable types differ in how they store values. Precision is the main difference: float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.

  • Float - 32 bit (7 digits)
  • Double - 64 bit (15-16 digits)
  • Decimal - 128 bit (28-29 significant digits)

The main difference is that Float and Double are binary floating point types, while Decimal stores the value as a floating decimal point type. So Decimals have much higher precision and are usually used in monetary (financial) applications that require a high degree of accuracy. But performance-wise, Decimals are slower than the double and float types.

Thanks @bradmacer!