> "Modern" languages try to avoid exceptions by using sum types and pattern matching plus lots of sugar to make this bearable. I personally dislike both exceptions and its emulation via sum types. ... I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid.
Special values like NaN are half-assed sum types. The latter give you compiler guarantees.
I’d like to see their argument for it. I see no help in pushing NaN as a number through a code path corrupting all operations it is part of, and the same is true for the others.
The reason NaN exists is for performance AFAIK. i.e. on a GPU you can't really have exceptions. You don't want to be constantly checking "did this individual floating-point op produce an error?" It's easier and faster for the individual floating point unit to flag the output as a NaN. Obviously NaNs long predate GPUs, but floating-point support was also hardware accelerated in a variety of ways for a long time.
That being said, I agree that the way NaNs propagate is messy. You can end up only finding out that there was an error much later during the program's execution and then it can be tricky to find out where it came from.
There is no direct argument/guidence that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code-search for its imports and find arguments pro & con in various places surrounding that.
From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.
Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.
I imagine all of it comes down to questions of how locally can/should code be forced to confront problems, much like arguments about try/except/catch kinds of exception handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions and essentially "batching factors" for error handling that relate to all the other batching factors going on.
Today's version of this wiki page also includes a discussion of Integer Nan: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed value (i.e. 0x80000000) of integers for NA.
To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
>To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
That's fair, I wasn't dimsissing the practice but rather just commenting that it's a shame the author didn't clarify their preference.
I don't think the popularity angle is a good proxy for usefulness/correction of the practice. Many factors can influence popularity.
Performance is a very fair point, I don't know enough to understand the details but I could see it being a strong argument. It is counter intuitive to move forward with calculations known to be useless, but maybe the cost of checking all calculations for validity is larger than the savings of skipping early the invalid ones.
There is a catch though. Numpy and R are very oriented to calculation pipelines, which is a very different usecase to general programming, where the side effects of undetected 'corrupt' values can be more serious.
The conversation around Nim for the past 20 years has been rather fragmented - IRC channels, Discord channels (dozens, I think), later the Forum, Github issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it to a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are".
Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, Databases, and operating systems (I would bin Numpy/R under Plangs+Databases as they are kind of "data languages"). Consequently, opinions can be very strong (often having this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.
If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymore Cray was doing it in the 1980s for the Cray-3 Vector SuperComputer). Anyway, just trying to help. That link to the DB Null page I gave is probably a good starting point.
The compiler can still enforce checks, such as with nil checks for pointers.
In my opinion it’s overall cleaner if the compiler handles enforcing it when it can. Something like “ensure variable is initialized” can just be another compiler check.
Combined with an effects system that lets you control which errors to enforce checking on or not. Nim has a nice `forbids: IOException` that lets users do that.
> The compiler can still enforce checks, such as with nil checks for pointers.
Only sometimes, when the compiler happens to be able to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be.
There's nothing preventing this for floats and ints in principle. e.g. the machine representation could be float, but the type in the eyes of the compiler could be `float | nan` until you check it for nan (at which point it becomes `float`). Then any operation which can return nan would return `float | nan` instead.
tbh this system (assuming it works that way) would be more strict at compile-time than the vast majority of languages.
There is a direct connection, you just don't have to bother with typing it. Same as type inference, the types are still there, you just don't have to specify them. If you have a collision in name and declaration then the compiler requires you to specify which version you wanted. And with language inspection tools (like LSP or other editor integration) you can easily figure out where something comes from if you need to. Most of the time though I find it fairly obvious when programming in Nim where something comes from, in your example it's trivial to see that the error code comes from the errorcodes module.
Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.
When I was learning Nim and learned how imports work and that things stringify with a $ function that comes along with their types (since everything is splat imported) and $ is massively overloaded I went "oh that all makes sense and works together". The LSP can help figure it out. It still feels like it's in bad taste.
It's similar to how Ruby (which also has "unstructured" imports) and Python are similar in a lot of ways yet make many opposite choices. I think a lot of Ruby's choices are "wrong" even though they fit together within the language.
Nim imports are great. I would hate to qualify everything. It feels so bureaucratic when going back to other languages. They never cause me issues and largely transparent. Best feature.
I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
It's fine to pick sentinel values for errors in context, but describing 0x80000000 as "pointless" in general with such a weak justification doesn't inspire confidence.
Without the low int the even/odd theorem falls apart for wrap around
I've definitely seen algorithms that rely upon that.
I would agree, whether error values are in or out of band is pretty context dependent
such as whether you answered a homework question wrong, or your dog ate it.
One is not a condition that can be graded.
that all integers are either even or odd, and that for an even integer that integer + 1 and - 1 are odd and vice versa for odd numbers. That the negative numbers have an additional digit from the positive numbers ensures that low(integer) and high(integer) have different parity. So when you wrap around with overflow or underflow you continue to transition from an even to odd, or odd to even.
Presumably since this language isn't C they can define it however they want to, for instance in rust std::i32::MIN.wrapping_sub(1) is a perfectly valid number.
I agree about sentinel values. Just return an error value.
I think the fib example is actually cool though. Integers are not the only possible domain. Everything that supports <=, +, and - is. Could be int, float, a vector/matrix, or even some weird custom type (providing that Nim has operator overloading, which it seems to).
May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.
Well, I agree about Fibable, it’s fine. It’s the actual fib function that doesn’t work for me. T can only be integer, because the base case returns 1 and the function returns T. Therefore it doesn’t work for all Fibables, just for integers.
In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the string "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as having an implicit "T(1)", "T(2)" -- which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.
Nim is Choice in many dimensions that other PLang's are insistently monosyllabic/stylistic about - gc or not or what kind, many kinds of spelling, new operator vs. overloaded old one, etc., etc., etc. Some people actually dislike choice because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.
But given the definition of Fibable, it could be anything that supports + and - operators. That could be broader than numbers. You could define it for sets for example. How do you add the number 1 to the set of strings containing (“dog”, “cat”, and “bear”)? So I suppose I do have a complaint about Fibable, which is that it’s underconstrained.
Granted, I don’t know nim. Maybe you can’t define + and - operators for non numbers?
Araq was probably trying to keep `Fibable` short for the point he was trying to make. So, your qualm might more be with his example than anything else.
You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that or do a `from system import nil` or etc.).
Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1, Nim 2) - `compiles`. So, the below will print out two lines - "2\n1\n":
when compiles SomeNumber "hi": echo 1 else: echo 2
when compiles SomeNumber 1.0: echo 1 else: echo 2
Last I knew "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what is the most elegant way to incorporate this extra constraint, but I think it's a mistake to think it is unincorporatable.
To address your question about '+', you can define it for non-SomeNumber, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement if the situation calls for `+` vs something else.
You're completely missing the point of this casual example in a blog post ... as evidenced by the fact that you omitted the type definition that preceded it, that is the whole point of the example. That it's not the best possible example is irrelevant. What is relevant is that the compiler can type check the code at the point of definition, not just at the point of instantiation.
And FWIW there are many possible types for T, as small integer constants are compatible with many types. And because of the "proc `<=`(a, b: Self): bool" in the concept definition of Fibable, the compiler knows that "2" is a constant of type T ... so any type that has a conversion proc for literals (remember that Nim has extensive compile-time metaprogramming features) can produce a value of its type given "2".
From my interaction with the Nim community, I came to the conclusion that nim could be more popular if its founder devolved decision making to scale up the community. I think he likes it the way it is; small, but his. He is Torvaldsesque in his social interactions.
I feel the same way - as I suspect a lot of people here do. Nim posts are always upvoted and usually people say nice things about the language in the comments.. but there are few who claim to actually -use- the language for more than a small private project, if even that.
The only way to really test out a programming language is by trying it out or reading how someone else approached a problem that you're interested in/know about.
There are over 2200 nimble packages now. Maybe not an eye-popping number, but there's still a good chance that somewhere in the json at https://github.com/nim-lang/packages you will find something interesting. There is also RosettaCode.org which has a lot of Nim example code.
This, of course, does not speak to the main point of this subthread about the founder but just to some "side ideas".
I worked in nim for a little bit and it truly has a lot of potential but ultimately abandoned it for the same reason. It's never going to grow beyond the founder's playground.
> floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
I have long thought that we need a NaI (not an integer) value for our signed ints. Ideally, the CPU would have overflow-aware instructions similar to floats that return this value on overflow and cost the same as wrapping addition/multiplication/etc.
From an implementation point of view, it would be similar to NaN; a designated sentinel value that all the arithmetic operations are made aware of and have special rules around producing and consuming.
>WCET ("worst case execution time") is an important consideration: Operations should take a fixed amount of time and the produced machine code should be predictable.
Good luck. Give the avionics guys a call if you solve this at the language level.
> "Modern" languages try to avoid exceptions by using sum types and pattern matching plus lots of sugar to make this bearable. I personally dislike both exceptions and its emulation via sum types. ... I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid.
Special values like NaN are half-assed sum types. The latter give you compiler guarantees.
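To make the contrast concrete, a small Nim sketch (illustrative names, not from the article; compare the sentinel a caller can silently forget with the `Option` the compiler forces the caller to unwrap):

    import std/options

    # sentinel style: the error state lives inside the int's own domain
    proc findSentinel(xs: seq[int]; x: int): int =
      for i, v in xs:
        if v == x: return i
      return low(int)   # nothing stops a caller from using this as an index

    # sum-type style: absence is a separate case the caller must handle
    proc findOpt(xs: seq[int]; x: int): Option[int] =
      for i, v in xs:
        if v == x: return some(i)
      return none(int)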
Not a defense of the poison value approach, but in this thread Araq (Nim's principal author) lays out his defense for exceptions.
https://forum.nim-lang.org/t/9596#63118
I’d like to see their argument for it. I see no help in pushing NaN as a number through a code path corrupting all operations it is part of, and the same is true for the others.
The reason NaN exists is for performance, AFAIK. For example, on a GPU you can't really have exceptions. You don't want to be constantly checking "did this individual floating-point op produce an error?" It's easier and faster for the individual floating-point unit to flag the output as a NaN. Obviously NaNs long predate GPUs, but floating-point support was also hardware accelerated in a variety of ways for a long time.
That being said, I agree that the way NaNs propagate is messy. You can end up only finding out that there was an error much later during the program's execution and then it can be tricky to find out where it came from.
There is no direct argument/guidance that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array-processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code search for its imports and find arguments pro & con in various places surrounding that.
From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.
Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.
I imagine all of it comes down to questions of how locally can/should code be forced to confront problems, much like arguments about try/except/catch kinds of exception handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions and essentially "batching factors" for error handling that relate to all the other batching factors going on.
Today's version of this wiki page also includes a discussion of an integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed integer value (i.e. 0x80000000) for NA.
There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL)
To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
> To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.
That's fair, I wasn't dismissing the practice but rather just commenting that it's a shame the author didn't clarify their preference.
I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity.
Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking all calculations for validity is larger than the savings of skipping the invalid ones early.
There is a catch though. NumPy and R are very oriented to calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.
The conversation around Nim for the past 20 years has been rather fragmented - IRC channels, Discord channels (dozens, I think), later the Forum, Github issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it to a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are".
Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, Databases, and operating systems (I would bin Numpy/R under Plangs+Databases as they are kind of "data languages"). Consequently, opinions can be very strong (often having this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.
If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymour Cray was doing it in the 1980s for the Cray-3 vector supercomputer). Anyway, just trying to help. That link to the DB Null page I gave is probably a good starting point.
Yeah, I'm not sure I've ever seen NaN held up as an example to be emulated before, rather than as something people complain about.
Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything.
The compiler can still enforce checks, such as with nil checks for pointers.
In my opinion it’s overall cleaner if the compiler handles enforcing it when it can. Something like “ensure variable is initialized” can just be another compiler check.
Combined with an effects system that lets you control which errors to enforce checking on or not. Nim has a nice `forbids: IOException` that lets users do that.
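For context, a minimal sketch of the `raises` side of that effect system (this uses the real `{.raises.}` pragma; the `forbids` form mentioned above is its blacklist-style counterpart):

    import std/strutils

    # the compiler rejects this proc if anything other than a
    # ValueError could escape from its body
    proc toNum(s: string): int {.raises: [ValueError].} =
      s.parseInt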
Both of these things respectively are just pattern matches and monads, just not user-definable ones.
> The compiler can still enforce checks, such as with nil checks for pointers.
Only sometimes, when the compiler happens to be able to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be.
There's nothing preventing this for floats and ints in principle. e.g. the machine representation could be float, but the type in the eyes of the compiler could be `float | nan` until you check it for nan (at which point it becomes `float`). Then any operation which can return nan would return `float | nan` instead.
tbh this system (assuming it works that way) would be more strict at compile-time than the vast majority of languages.
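Present-day Nim can approximate that scheme with a distinct type standing in for the unchecked `float | nan` state (hypothetical names, just to make the idea concrete):

    import std/math

    type UncheckedFloat = distinct float   # plays the role of `float | nan`

    proc checkNan(x: UncheckedFloat): float =
      ## the only way back to a plain float: forces the NaN check
      if isNaN(float(x)): raise newException(ValueError, "got NaN")
      return float(x)

    proc divide(a, b: float): UncheckedFloat =
      UncheckedFloat(a / b)   # may produce NaN (e.g. 0.0 / 0.0)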
The biggest thing I still don’t like about Nim is its imports:
I can’t stand that there’s no direct connection between the thing you import and the names that wind up in your namespace.
There is a direct connection, you just don't have to bother with typing it. Same as type inference: the types are still there, you just don't have to specify them. If you have a collision in name and declaration then the compiler requires you to specify which version you wanted. And with language inspection tools (like LSP or other editor integration) you can easily figure out where something comes from if you need to. Most of the time though I find it fairly obvious when programming in Nim where something comes from; in your example it's trivial to see that the error code comes from the errorcodes module.
Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.
When I was learning Nim and learned how imports work, that things stringify with a $ function that comes along with their types (since everything is splat imported), and that $ is massively overloaded, I went "oh, that all makes sense and works together". The LSP can help figure it out. It still feels like it's in bad taste.
It's similar to how Ruby (which also has "unstructured" imports) and Python are similar in a lot of ways yet make many opposite choices. I think a lot of Ruby's choices are "wrong" even though they fit together within the language.
Nim imports are great. I would hate to qualify everything. It feels so bureaucratic when going back to other languages. They never cause me issues and are largely transparent. Best feature.
It needs to be this way so that UFCS works properly. Imagine if instead of "a,b".split(','), you had to write "a,b".(strutils.split)(',').
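For readers who haven't met UFCS: a call can be written function-first or method-style, which is why unqualified names matter (a quick sketch):

    import std/strutils

    # these two calls are identical; UFCS rewrites the second into the first
    echo split("a,b", ',')   # @["a", "b"]
    echo "a,b".split(',')    # @["a", "b"]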
OK, I do not understand. What is preventing this:

    import std/errorcodes

from allowing me to use `raise errorcodes.RangeError` instead of what Nim has? Or even `import std/ErrorCodes`, keeping the plural, as in `ErrorCodes.RangeError`; I wouldn't mind that.
Nothing, and in fact this works. To move to an example which actually compiles:
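(A sketch of the kind of snippet meant, assuming `std/math`'s `FloatClass` enum, which the reply below names:)

    import std/math

    echo fcNormal                  # bare name, available via the splat import
    echo FloatClass.fcNormal       # qualified by the enum type
    echo math.fcNormal             # qualified by the module
    echo math.FloatClass.fcNormal  # fully qualified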
All of these ways of identifying the `fcNormal` enum value work, with varying levels of specificity. If instead you do `from math import nil`, only the latter two work.
100% my beef with it. Same style as C++, where you never know where something comes from when clangd starts throwing one of its fits.
You are free to import nil and type the fully qualified name.
There are many things to like about Nim, but it does benefit from adherence to a style guide more than most languages.
Big "college freshman" energy in this take:
It's fine to pick sentinel values for errors in context, but describing 0x80000000 as "pointless" in general with such a weak justification doesn't inspire confidence.I had the impression, the creator of Nim isn't very fond of academic( solution)s.
Without the low int, the even/odd theorem falls apart for wraparound. I've definitely seen algorithms that rely upon that.
I would agree, whether error values are in or out of band is pretty context dependent, such as whether you answered a homework question wrong, or your dog ate it. One is not a condition that can be graded.
What is the "even/odd theorem"?
That all integers are either even or odd, and that for an even integer, integer + 1 and integer - 1 are odd, and vice versa for odd numbers. Because the negative range has one more value than the positive range, low(integer) and high(integer) have different parity. So when you wrap around with overflow or underflow you continue to transition from even to odd, or odd to even.
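A quick check of that parity claim (a minimal Nim sketch; the wrap is routed through uint32 because Nim's signed arithmetic is overflow-checked by default):

    let hi = high(int32)   # 2147483647, odd
    let lo = low(int32)    # -2147483648, even
    assert (hi and 1) == 1
    assert (lo and 1) == 0
    # wrapping high(int32) + 1 lands on low(int32): odd -> even, as stated
    assert cast[int32](cast[uint32](hi) + 1) == lo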
If you need wraparound, you should not use signed integers anyway, as that leads to undefined behavior.
Presumably since this language isn't C they can define it however they want to; for instance, in Rust `std::i32::MIN.wrapping_sub(1)` is a perfectly valid number.
Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.
Signed overflow being UB (while unsigned is defined to wrap) is a quirk of C and C++ specifically, not some fundamental property of computing.
> Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.
Presumably unsigned ints want to return errors too?
Edit: I guess they could get rid of a few numbers... Anyhow, it isn't a philosophy that is going to get me to consider Nimony for anything.
> making basic types work differently from C would involve major performance costs.
Not if you compile with optimizations on. This C code:
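(A plausible reconstruction of the missing snippet, assuming the usual signed-add example; the exact original code is an assumption:)

    int add(int a, int b) {
        return a + b;
    }

With clang -O2 this becomes:

    add:
        lea eax, [rdi + rsi]
        ret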
Which, for those who aren't familiar with x86 assembly, is just the normal instruction for adding two numbers with wrapping semantics.
Specifically, C comes from a world where allowing for machines that didn't use 2's complement (or 8-bit bytes) was an active concern.
Interestingly, C23 and C++20 standardized 2's complement representation for signed integers but kept UB on signed overflow.
I have been burned by sentinel values every time. Give me sum types instead. And while I’m piling on, this example makes no sense to me:
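(The article's example went missing here; a sketch reconstructed from details quoted in the replies: a `Fibable` concept requiring `<=`, `+`, and `-`, and a `fib` whose base case returns the literal 1. Treat it as an approximation, not the article's exact code.)

    type Fibable = concept
      proc `<=`(a, b: Self): bool
      proc `+`(a, b: Self): Self
      proc `-`(a, b: Self): Self

    proc fib[T: Fibable](n: T): T =
      if n <= 2: 1
      else: fib(n - 1) + fib(n - 2)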
Integer is the only possible type for T in this implementation, so what was the point of defining Fibable?
I agree about sentinel values. Just return an error value.
I think the fib example is actually cool though. Integers are not the only possible domain. Everything that supports <=, +, and - is. Could be int, float, a vector/matrix, or even some weird custom type (provided that Nim has operator overloading, which it seems to).
May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.
Well, I agree about Fibable, it’s fine. It’s the actual fib function that doesn’t work for me. T can only be integer, because the base case returns 1 and the function returns T. Therefore it doesn’t work for all Fibables, just for integers.
In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the string "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as having an implicit "T(1)", "T(2)" -- which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.
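Concretely, something like this (a sketch of that suggestion; the `Fibable` constraint is dropped, per the comment above, so the float instantiation is accepted):

    proc fib[T](n: T): T =
      if n <= 2: 1              # the literal converts to T implicitly
      else: fib(n - 1) + fib(n - 2)

    echo fib(7)    # 13
    echo fib(7.0)  # 13.0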
Nim is Choice in many dimensions that other PLangs are insistently monosyllabic/stylistic about - gc or not or what kind, many kinds of spelling, new operator vs. overloaded old one, etc., etc., etc. Some people actually dislike choice because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.
But given the definition of Fibable, it could be anything that supports + and - operators. That could be broader than numbers. You could define it for sets for example. How do you add the number 1 to the set of strings containing (“dog”, “cat”, and “bear”)? So I suppose I do have a complaint about Fibable, which is that it’s underconstrained.
Granted, I don’t know nim. Maybe you can’t define + and - operators for non numbers?
Araq was probably trying to keep `Fibable` short for the point he was trying to make. So, your qualm might more be with his example than anything else.
You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that or do a `from system import nil` or etc.).
Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1, Nim 2) - `compiles`. So, the below will print out two lines - "2\n1\n":
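    when compiles(SomeNumber("hi")): echo 1
    else: echo 2
    when compiles(SomeNumber(1.0)): echo 1
    else: echo 2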
Last I knew "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what is the most elegant way to incorporate this extra constraint, but I think it's a mistake to think it is unincorporatable.To address your question about '+', you can define it for non-SomeNumber, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement if the situation calls for `+` vs something else.
I see, I misunderstood your complaint then.
However, the base case being 1 does not preclude other types than integers, as cb321 pointed out.
You're completely missing the point of this casual example in a blog post ... as evidenced by the fact that you omitted the type definition that preceded it, that is the whole point of the example. That it's not the best possible example is irrelevant. What is relevant is that the compiler can type check the code at the point of definition, not just at the point of instantiation.
And FWIW there are many possible types for T, as small integer constants are compatible with many types. And because of the "proc `<=`(a, b: Self): bool" in the concept definition of Fibable, the compiler knows that "2" is a constant of type T ... so any type that has a conversion proc for literals (remember that Nim has extensive compile-time metaprogramming features) can produce a value of its type given "2".
There can be a lot of different integers: int16, int32, ... and unsigned variants. Even huge BigNum integers of any length.
From my interaction with the Nim community, I came to the conclusion that nim could be more popular if its founder devolved decision making to scale up the community. I think he likes it the way it is; small, but his. He is Torvaldsesque in his social interactions.
I feel the same way - as I suspect a lot of people here do. Nim posts are always upvoted and usually people say nice things about the language in the comments.. but there are few who claim to actually -use- the language for more than a small private project, if even that.
The only way to really test out a programming language is by trying it out or reading how someone else approached a problem that you're interested in/know about.
There are over 2200 nimble packages now. Maybe not an eye-popping number, but there's still a good chance that somewhere in the json at https://github.com/nim-lang/packages you will find something interesting. There is also RosettaCode.org which has a lot of Nim example code.
This, of course, does not speak to the main point of this subthread about the founder but just to some "side ideas".
I worked in nim for a little bit and it truly has a lot of potential but ultimately abandoned it for the same reason. It's never going to grow beyond the founder's playground.
Please no. Design by committee would lead to another C++.
Languages with design by committee are aplenty, including all mainstream ones; not a single one is still being developed by a single person.
The second or third most popular language of all time? God forbid lol
Popular does not mean good. Tobacco smoking is also popular.
Do you think this is clever? For a metaphor to be relevant to a discussion it has to be fitting, not just a dunk.
It’s not a metaphor. I was giving a counterexample to your implied claim that popularity is an indicator of quality.
That wasn't an implied claim because we're not discussing metrics for judging quality.
> floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).
I have long thought that we need a NaI (not an integer) value for our signed ints. Ideally, the CPU would have overflow-aware instructions similar to floats that return this value on overflow and cost the same as wrapping addition/multiplication/etc.
From an implementation point of view, it would be similar to NaN; a designated sentinel value that all the arithmetic operations are made aware of and have special rules around producing and consuming.
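A software sketch of those rules (hypothetical names; the proposal's point is that hardware would do this at the cost of an ordinary add), using low(int32) as the NaI sentinel:

    const NaI = low(int32)   # the designated "not an integer" value

    proc naiAdd(a, b: int32): int32 =
      # NaI propagates through arithmetic, like NaN
      if a == NaI or b == NaI: return NaI
      # compute in 64 bits; overflow (or landing on the sentinel) -> NaI
      let wide = int64(a) + int64(b)
      if wide < int64(low(int32)) + 1 or wide > int64(high(int32)):
        return NaI
      return int32(wide)

    assert naiAdd(high(int32), 1'i32) == NaI         # overflow poisons
    assert naiAdd(naiAdd(1'i32, 2'i32), 4'i32) == 7  # normal math unaffected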
Does Nimony/Nim 3.0 have pattern matching, or any plans for it?
Yes
Design: https://github.com/nim-lang/RFCs/issues/559
Plan: https://forum.nim-lang.org/t/13357#81170
>WCET ("worst case execution time") is an important consideration: Operations should take a fixed amount of time and the produced machine code should be predictable.
Good luck. Give the avionics guys a call if you solve this at the language level.
> It is not possible to say which exceptions are possible
So repeating the same mistake that Spring made by using runtime exceptions everywhere.
Now you can never know how exactly a function can fail, which means you are flying completely blind.