# [Solved] What does the M stand for in C# Decimal literal notation?

To work with the `decimal` data type, I have to add a suffix when initializing a variable:

```csharp
decimal aValue = 50.0M;
```

What does the M part stand for?

Solution #1:

It means it’s a decimal literal, as others have said. However, the origins are probably not what the other answers suggest. From the C# Annotated Standard (the ECMA version, not the MS version):

> The `decimal` suffix is M/m since D/d was already taken by `double`. Although it has been suggested that M stands for money, Peter Golde recalls that M was chosen simply as the next best letter in `decimal`.

A similar annotation mentions that early versions of C# included “Y” and “S” suffixes for `byte` and `short` literals respectively. They were dropped on the grounds that they weren’t useful very often.

Solution #2:

From the C# specification:

```csharp
var f = 0f;   // float
var d = 0d;   // double
var m = 0m;   // decimal (money)
var u = 0u;   // unsigned int
var l = 0L;   // long
var ul = 0ul; // unsigned long
```

Note that the suffixes can be written in either uppercase or lowercase.
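As a minimal sketch (assuming a top-level C# program), you can confirm that both casings of the suffix produce the same type:

```csharp
using System;

// Both casings of the M suffix yield a System.Decimal.
decimal upper = 1M;
decimal lower = 1m;
Console.WriteLine(upper.GetType()); // System.Decimal
Console.WriteLine(lower.GetType()); // System.Decimal
```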

Solution #3:

M is the first unambiguous letter in “decimal” (D was already taken by `double`). If you don’t add the suffix, the number is treated as a `double`.
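To illustrate the point (a minimal sketch): without the suffix a real literal is a `double`, and there is no implicit conversion from `double` to `decimal`, so the suffix (or an explicit cast) is required:

```csharp
using System;

// decimal a = 50.0;       // compile error: 50.0 is a double literal,
//                         // and double -> decimal is not implicit
decimal b = 50.0M;         // OK: the M suffix makes the literal a decimal
decimal c = (decimal)50.0; // also compiles, but goes through double first
Console.WriteLine(b);      // 50.0
```

The cast works, but it rounds the value through binary floating point first, so the suffix is the cleaner choice.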

Solution #4:

A real literal suffixed by M or m is of type decimal (money). For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker’s rounding. Any scale apparent in the literal is preserved unless the value is rounded or the value is zero (in which latter case the sign and scale will be 0). Hence, the literal 2.900m will be parsed to form the decimal with sign 0, coefficient 2900, and scale 3.