
What is the difference between %g and %f in C?

1 Answer

1. %f: This format specifier prints floating-point numbers in fixed-point (decimal) notation. The precision gives the number of digits after the decimal point, regardless of the value's magnitude; if no precision is specified, it defaults to six.

Example:

```c
double num = 123.456789;
printf("%.2f", num);  // Output: 123.46
```
2. %g: This format specifier automatically chooses the more compact of %f and %e (scientific notation), based on the value's exponent and the precision. It strips trailing zeros and any unnecessary decimal point, so it is typically used when a compact representation is wanted. For very large or very small values, %g switches to scientific notation automatically.

Example:

```c
double num = 123.456789;
printf("%g\n", num);   // Output: 123.457 (default precision)

double num2 = 0.0000123456789;
printf("%g", num2);    // Output: 1.23457e-05
```

In summary, %f always prints in fixed-point decimal notation, while %g picks whichever of fixed-point (%f) or scientific (%e) notation is more compact, maintaining precision while keeping the output short and readable.

June 29, 2024, 12:07
