For example, consider the sample input and expected output below.
Input  | Output
1.0    | 1
1.01   | 1.01
1.0010 | 1.001
0.00   | 0
1.0050 | 1.005
using System;

namespace SampleProgram
{
    class MainProgram
    {
        public static void Main(string[] args)
        {
            decimal[] decimalNumbers = { 1.0M, 1.01M, 1.0010M, 0.00M, 1.0050M };

            foreach (decimal decimalNumber in decimalNumbers)
            {
                // "0.####" prints at most four decimal places and drops trailing zeros.
                Console.WriteLine("Original Decimal Number = {0}, Without Zeros = {1}",
                    decimalNumber, decimalNumber.ToString("0.####"));
            }
        }
    }
}
If we enter 1.1234567890, the output is 1.1235 instead of 1.123456789. This is because the format string "0.####" contains only four # symbols: each # represents one optional decimal place, so the number of # symbols sets the maximum decimal places in the output. Keep in mind that a C# decimal can hold at most 28-29 significant digits in total, counting both the integral and fractional parts.
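If you need more (or all) of the decimal places preserved, you can supply more # placeholders or use the general "G29" format specifier, which prints up to 29 significant digits and also drops trailing zeros. The snippet below is a minimal sketch of both options (the class and variable names are just for illustration); note that "G29" can switch to scientific notation for very small values such as 0.0000001M, so test it against your expected inputs.

using System;

class FormatDemo
{
    public static void Main(string[] args)
    {
        decimal value = 1.1234567890M;

        // "G29" keeps up to 29 significant digits and removes trailing zeros,
        // so this prints 1.123456789 with nothing rounded away.
        Console.WriteLine(value.ToString("G29"));

        // A custom format with 28 '#' placeholders (the maximum scale of a decimal)
        // also prints 1.123456789 and always stays in plain decimal notation.
        Console.WriteLine(value.ToString("0." + new string('#', 28)));
    }
}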