Man... Go could have been so good.
Instead it just takes all the problems that exist in programming and pretends they don't exist.
"How big is an array?" "An int big!" "...a signed int?" "Yeah sure why not." "What happens if your array is 2049 MB long on a 32-bit platform?" "*shrug*"
Srsly, there are three kinds of languages as far as I can tell: those that give numerical types a specific, defined size, those where they're arbitrary-precision, and Go+C.
'Cause if C does it, it has to be a good idea, right?
I mean... Correct me if this is wrong, I am sleepy, but I can't think of any other language where your most common number type varies in size as the machine word size changes.
If you have programs that talk over a network, it's just begging for trouble.
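Case in point, a little Go sketch: the stdlib's own encoding/binary won't even serialize a bare int, because it has no fixed size. You end up spelling out int32/int64 for the wire anyway:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	var buf bytes.Buffer

	n := 42 // plain int: width depends on the platform

	// encoding/binary only accepts fixed-size types, so this errors out.
	if err := binary.Write(&buf, binary.BigEndian, n); err != nil {
		fmt.Println("plain int:", err)
	}

	// Spell out the size and it works the same everywhere.
	if err := binary.Write(&buf, binary.BigEndian, int64(n)); err != nil {
		fmt.Println("int64:", err)
	}
	fmt.Println("bytes written:", buf.Len()) // 8
}
```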
@icefox in my imaginary "C, but better," there would be an explicit difference between "native"-sized ints and explicitly-sized ints. Both are useful in different cases.
@icefox For me it would be more: explicit sizes for net/disk/other comms and anywhere the precision/size actually matters, and native everywhere else where I don't care. As long as there's the option. I've considered making a few #defines so I could have this in C already, but library interoperability would make it confusing.
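Something like this split, sketched in Go (the wireHeader type and its fields are just made-up names for illustration):

```go
package main

import "fmt"

// The explicit-size half: anything that crosses a boundary
// (network, disk) gets a spelled-out width.
type wireHeader struct {
	Version int32
	Length  int64
}

func main() {
	h := wireHeader{Version: 1, Length: 3}
	data := []byte{0xAA, 0xBB, 0xCC}

	// The native half: loop counters and indexes, where the exact
	// width genuinely doesn't matter.
	sum := 0
	for i := 0; i < len(data); i++ {
		sum += int(data[i])
	}
	fmt.Println(h, sum)
}
```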
@impiaaa "This isn't going to be a problem" just means it's not a problem until someone else touches it and does something unexpected, or it means the size invariants are enforced by something other than the compiler (loading a known file format, for example).
@impiaaa Yeah, except you always care, because numerical overflows/underflows are almost always unchecked and almost never what you want.
"Don't care" is i32, or f64, or i64 if you want. Because then you don't *have* to care. It will never act differently from one platform to another, and how it will act is always exactly how you want it to act. You will never get fucked over by it unexpectedly.