In order to work with decimal data types, I have to do this with variable initialization:
decimal aValue = 50.0M;
What does the M part stand for?
It means it’s a decimal literal, as others have said. However, the origins are probably not those suggested in other answers here. From the C# Annotated Standard (the ECMA version, not the MS version):

The decimal suffix is M/m since D/d was already taken by double. Although it has been suggested that M stands for money, Peter Golde recalls that M was chosen simply as the next best letter in decimal.
A similar annotation mentions that early versions of C# included “Y” and “S” for byte and short literals respectively. They were dropped on the grounds of not being useful very often.
From C# specifications:
var f = 0f;   // float
var d = 0d;   // double
var m = 0m;   // decimal (money)
var u = 0u;   // unsigned int
var l = 0l;   // long
var ul = 0ul; // unsigned long
Note that the suffixes can be written in either uppercase or lowercase.
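Each suffix fixes the compile-time type of the literal, which you can confirm with GetType(). A quick sketch:

```csharp
using System;

class SuffixDemo
{
    static void Main()
    {
        // Each suffix determines the literal's compile-time type.
        Console.WriteLine(0f.GetType());  // System.Single
        Console.WriteLine(0d.GetType());  // System.Double
        Console.WriteLine(0m.GetType());  // System.Decimal
        Console.WriteLine(0u.GetType());  // System.UInt32
        Console.WriteLine(0L.GetType());  // System.Int64
        Console.WriteLine(0UL.GetType()); // System.UInt64
    }
}
```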
M refers to the first non-ambiguous character in “decimal”. If you don’t add it, the number will be treated as a double.
D is double.
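To see why the suffix matters: an unsuffixed real literal is a double, and there is no implicit conversion from double to decimal, so the assignment from the question fails to compile without M (or an explicit cast). A minimal illustration:

```csharp
using System;

class WhyM
{
    static void Main()
    {
        // decimal bad = 50.0;           // compile error CS0664: a double literal
        //                               // cannot be implicitly converted to decimal

        decimal viaSuffix = 50.0M;       // OK: decimal literal
        decimal viaCast = (decimal)50.0; // OK: explicit conversion from double

        Console.WriteLine(viaSuffix == viaCast); // True for this value
    }
}
```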
A real literal suffixed by M or m is of type decimal (money). For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker’s rounding. Any scale apparent in the literal is preserved unless the value is rounded or the value is zero (in which latter case the sign and scale will be 0). Hence, the literal 2.900m will be parsed to form the decimal with sign 0, coefficient 2900, and scale 3.
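The scale-preservation behavior described above is observable: 2.900m keeps its three decimal places when printed, and decimal.GetBits exposes the coefficient and scale (per the decimal.GetBits documentation, the returned array holds the 96-bit coefficient in the first three ints and the scale in bits 16–23 of the fourth). A small check:

```csharp
using System;
using System.Globalization;

class ScaleDemo
{
    static void Main()
    {
        decimal m = 2.900m;

        // Trailing zeros are preserved: prints "2.900", not "2.9".
        Console.WriteLine(m.ToString(CultureInfo.InvariantCulture));

        int[] bits = decimal.GetBits(m);
        int coefficientLow = bits[0];       // low 32 bits of coefficient: 2900
        int scale = (bits[3] >> 16) & 0xFF; // scale: 3

        Console.WriteLine($"coefficient={coefficientLow}, scale={scale}");
    }
}
```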
Well, I guess M represents the mantissa. Decimal can be used to store money, but that doesn’t mean decimal is only used for money.