anonymous1337 wrote:
> Dude, v8 is f*ing fast these days :)
> http://benchmarksgame.alioth.debian.org ... &lang2=gcc

agamemnus wrote:
> It's far faster to code with than many other languages. But still, static typing and weak garbage collection will always be faster than non-static typing and automatic garbage collection... right?

Depends on how much you depend on those features, but I'm just speculating. (It's not like I have the capacity to work on V8 or LLVM myself, for example.)

Considering most people are processing server-side JS using V8 + Node.js (which then hands execution off to async C++ modules), those benchmarks are not bad at all given everything JavaScript can do.
How much faster is "normal" automatically optimized code compared to code that was specifically structured to avoid cache misses and tuned for a specific system (which a JIT can do at runtime, but an optimizing static compiler can only do if you target that system specifically)? Or how about garbage collection vs. manual memory management? How good are people at manual memory management, really? (Allocating memory is one of the most expensive operations, so highly optimized code often does it in big batches, for example by turning separate arrays into a single vectorized array.)
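The "big batch" allocation style mentioned above can be sketched in JS itself. This is a hypothetical particle-update example (the names and layout are my own, not from any particular library): the second version allocates one flat typed array up front instead of many small objects, which means fewer GC allocations and better cache locality.

```javascript
// Object-per-particle: one heap allocation per particle.
const particles = Array.from({ length: 4 }, () => ({ x: 0, v: 1 }));
for (const p of particles) p.x += p.v;

// Batched: one Float64Array holding interleaved [x0, v0, x1, v1, ...].
const n = 4;
const data = new Float64Array(2 * n);               // single allocation
for (let i = 0; i < n; i++) data[2 * i + 1] = 1;    // set each v = 1
for (let i = 0; i < n; i++) data[2 * i] += data[2 * i + 1]; // x += v
```

Both loops compute the same result; the typed-array version just keeps everything in one contiguous buffer, which is the kind of restructuring a human does by hand in optimized C or C++.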
I'd say that if you're just writing code, not optimizing it at the framework level, you're not getting as many benefits from C or C++ as you might think. That depends on the underlying libraries as well as the compiler implementation. Of course, the reference Ruby implementation is still essentially interpreted, from what I recall. It's very slow; it scaled horribly for Twitter once they reached that point. Ruby is still an incredibly useful language. You just wouldn't want your game engine's inner loop written in Ruby, for example.
Now, Node.js and JavaScript with V8 are particularly interesting because JavaScript can be made extraordinarily fast. One, you have the V8 compiler, which figures out the best ways to cache and optimize code. Thank you, Google. Two, you have the architecture, which delegates the "inner loop"-type processing to async C++ threads, so there your code is basically as optimized as it's going to get already. (How expensive are function calls or marshalling between languages, really? Java does it all the time, and it's maybe a fraction slower than C++ except in the cases, again, where people specifically optimize their code.)
So yeah, obviously there's overhead (RTTI, processing) to permit dynamic typing. I'm not so sure about garbage collection, because some form of it is already being done in modern C++ apps anyway: at least some sort of automated memory management via auto / smart pointers.
Again, if you use JavaScript the way you would a statically typed language, maybe there are things that can be optimized. There will certainly still be overhead, but it appears to be surprisingly minimal for most uses. With something like ASM.js and the right JS compiler, I think you can achieve real desktop-client performance now anyway... but don't quote me on that. I haven't followed that particular end of the technology in a while.
ASM.js removes a lot of info / safety checks from JS, which makes it more barebones. And that raises an important question: can you remove features of a language at compile time to allow optimizations? That's a crazy thought. It's what C and C++ allow you to do. C++ is really complex, but so much of that complexity is built on top of these surprisingly powerful and heavily optimized core features. But imagine JS without all of the safety checks and RTTI information, maybe even without duck typing (although that might break the core of how the language functions in other places).
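A tiny asm.js-style module shows what "removing features" looks like in practice (a minimal sketch, not a complete asm.js module; it runs as plain JS if the engine doesn't validate asm.js). The `|0` coercions promise the compiler that every value is an int32, so it can skip dynamic type checks and use raw machine arithmetic, including int32 wraparound on overflow instead of JS's usual float semantics.

```javascript
function AsmModule() {
  "use asm";
  function add(a, b) {
    a = a | 0;            // coerce parameter to int32
    b = b | 0;
    return (a + b) | 0;   // int32 addition, wraps on overflow
  }
  return { add: add };
}

const mod = AsmModule();
mod.add(2, 3);               // ordinary addition
mod.add(0x7fffffff, 1);      // wraps around, like C int arithmetic
```

That overflow behavior is exactly the kind of safety the full language normally gives you (seamless promotion to doubles) being traded away for optimizability.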
The fact that you get all of these features in JS and it's still fast as hell genuinely confuses me. It's just crazy to think about. Or how algorithms written in JS or Ruby can actually perform faster than the same algorithm implementations in C or C++. That's weird, right? Just because a language offers ways to accomplish different things out of the box, without interference from language, runtime, or syntactical restrictions. One might think that shouldn't matter when you're comparing against C or C++ (especially C++, I think, because of its powerful standard library), since you're so close to how processing on the CPU actually works, but apparently it does.
I'd be interested in seeing the resulting processing / assembly from the simplest possible cases where JS or Ruby significantly outperform C or C++ implementations.