csanchez wrote: Hi guys, what do you think about Object-Oriented Programming (OOP)?
Is it right to use it? Is it wrong?
Neil wrote: It depends on the amount of complexity needed to reach a certain goal.
OOP in itself is a powerful tool, but the way it is taught in schools
(and often on the internet) makes it harmful for
the industry, as it often adds unneeded complexity.
The way they usually teach it introduces the following problems:
- it makes the code harder to write
- it makes the code harder to process (by human and/or machine)
- it makes the code harder to maintain
The thing is that those people are often taught that the only way to write
readable, good code is to use OOP, and not just normal OOP but entire
dependency trees that end up in painful situations.
They're taught that directly writing and/or reading data is complicated and
an easy way to get yourself in trouble, so they write huge abstractions with all sorts
of crazy overengineering only to make things SEEM simpler.
Like, instead of a Dog class depending on an Animal class, you have a _protoAnimal
class, an Animal class, a Canine class, and classes like Legs and Eyes which go inside
other classes.
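Neil's example can be sketched like this (all class names are hypothetical, echoing the ones above); both versions model the same dog, but one buries the data under a tower of classes:

```python
# Over-engineered version: the kind of tower Neil describes.
class _ProtoAnimal:
    pass

class Eyes:
    def __init__(self, count=2):
        self.count = count

class Legs:
    def __init__(self, count=4):
        self.count = count

class Animal(_ProtoAnimal):
    def __init__(self):
        self.eyes = Eyes()  # classes nested inside other classes
        self.legs = Legs()

class Canine(Animal):
    pass

class Dog(Canine):
    pass

# Direct version: one class holding plain data does the same job.
class SimpleDog:
    def __init__(self):
        self.eye_count = 2
        self.leg_count = 4

assert Dog().legs.count == SimpleDog().leg_count  # both answer 4
```

Both expose the same information; the first just makes the reader walk four classes to find it.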
Aside from that, OOP gets a bad rep for all the problems I mentioned, while actual
OOP isn't nearly as complicated if you know how to do it right; it can be a neat
tool to simplify things. It's just that overengineering has become the standard.
Mysoft wrote: My problem with OOP is not the paradigm, but the language features built around it. For example:
dim as MyObject A
would cause a "function" to be called, and if you then did:
A += B
another function would be called, along with other implicit stuff like that.
The first case is a constructor; the second is an operator overload.
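The snippet above looks like FreeBASIC; the same two implicit calls can be sketched in Python (MyObject is a hypothetical type): the constructor runs on creation, and `a += b` silently calls an overloaded operator.

```python
class MyObject:
    def __init__(self, value=0):
        # Implicitly called by MyObject(...) -- Mysoft's first case.
        self.value = value

    def __iadd__(self, other):
        # Implicitly called by `a += b` -- the second case.
        self.value += other.value
        return self

a = MyObject(1)   # a "function" (the constructor) runs here
b = MyObject(2)
a += b            # another function (the operator overload) runs here
assert a.value == 3
```

Neither call site looks like a function call, which is exactly the hidden machinery being complained about.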
Anyway I think Neil's lines make sense.
But another thing is when a language forces you to use classes and OOP features;
that's where it goes bonkers, because it removes the choice while adding the extra complexity.
The more there is for the compiler to handle, the less the programmer knows, and the dumber they get.
So if one were to use OOP without the "automatic" facilities, to the point that
they still had to manually call the constructor (or the function that creates
the object and implicitly initializes it there) and so on... stuff that forces
you to see/handle the innards of the object, then you would end up with code that you
wouldn't want to deal with; and if that happens, it means you're using OOP wrong.
It's like the world economy,
which relies on "slaves" doing the hard work to make things easy for the powerful people.
You see the problem there?
csanchez wrote: quotation marks?
Mysoft wrote: Yeah, because the "slave" where the abuse happens is the computer and its hardware,
but that does not change the picture.
In my opinion, just like a good leader is the one who sets the example,
a good lead programmer is the one who codes by setting the example,
instead of passing the responsibility to the hardware.
csanchez wrote: But they are not Slaves, they are Constructors, heh.
Mysoft wrote: The slave is the hardware that is forced to comply with this $%#@.
"Constructor" is just the current title, made to give them some value while they're forced to do the dirty job.
Another analogy would be Fullmetal Alchemist:
when he almost made the stone using live humans, thinking it was
just that red water, because the actual work was hidden.
Neil wrote: Reading this now, I agree with Mysoft that languages which
enforce OOP tend to overcomplicate things, as they lead the developer toward the
harmful ideas I already mentioned in my argument.
I disagree that the compiler handling everything makes the developer dumber; rather,
it keeps the developer from knowing what's happening at the core of their code.
It's not a lack of intelligence but a lack of knowledge of what your program is
even doing (which also affects newbies, as they mislearn how things work; on that point Mysoft IS right).
Compilers should NOT guide the user toward writing complex code. Instead, compilers should guide the
developer to handle things properly: to know what their code is doing at all times, to know
when the code might be doing something extremely cursed, and even to lead the developer to write
direct, and thus efficient, code which does not rely on massive amounts of abstraction for the
sake of abstraction, all while protecting the program's integrity by handling errors and possible
problems properly.
And I don't think abstraction is inherently wrong either; we all use abstraction.
I just think that in several examples of state-of-the-art software, we can see that they're
flooded with weird ideas about how software must be written AROUND abstraction, instead of
writing the necessary abstraction AROUND fixing a real problem your code NEEDS to solve;
it becomes a distraction.
I am not against constructors either, but I do think they add a layer of abstraction
that hides what the code intends to do and/or the data it's processing. They have existed since
before C did, and they're okay as long as they're fairly explicit about what they're meant
to do in more complex data structures; though sincerely, this last point is just my personal preference.
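As a sketch of what a "fairly explicit" constructor might look like, here is a hypothetical Texture class whose constructor names and sets every field in plain sight, so nothing about the object's starting state is hidden from the reader:

```python
class Texture:
    # Explicit constructor: every field is named and assigned in plain
    # sight, so the caller can see exactly what state gets built.
    def __init__(self, width, height, fmt):
        self.width = width
        self.height = height
        self.fmt = fmt
        # 4 bytes per pixel (RGBA), allocated up front, no hidden magic.
        self.pixels = bytearray(width * height * 4)

t = Texture(4, 4, "rgba8")
assert len(t.pixels) == 64  # 4 * 4 * 4 bytes, exactly as written above
```

The abstraction is still there, but anyone reading the class knows precisely what constructing one costs and produces.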
Oh, I forgot to mention that depending on the hardware for certain tasks is actually a better idea than not.
As an example, OpenGL is known for being slow and inefficient on most hardware, as it's abstracted
in a way that makes things hard to track and hard to process for both the CPU and GPU when compared
to Vulkan, DirectX, Metal, etc.
I am not saying that software should be written to support only up-to-date hardware, but rather to take
actual care about the target demographic and focus on optimization for the hardware the software is
meant to run on, instead of just forcing consumerism and making people pay for new hardware.
(I heavily recommend BGFX for this kind of task, as it abstracts both old and new rendering pipelines,
so it always gets the best performance on whatever hardware it runs on, to the point of having negative
overhead in most cases compared to using the raw rendering APIs, while also saving you from the pain lol)
It's pretty funny how macOS apps tend to be covered in OOP, yet the bare frameworks
that make them work are purely imperative and data oriented (like Accelerate).
Interesting links:
Chris wrote: Indeed, there are very good programmers who reject OOP, and most of the time the rejection relates
to how it is implemented: bloatware and the current trend of anti-pattern practices around objects.
I believe OOP should be used in memory-persistent apps with interfaces. That applies to any Windows app
with a user interface, a Linux GUI, or whatever you can imagine with an interface and a running process; for
a programmer it is very easy to understand all the parts involved as objects, because it is natural.
I reject the new trend of CLOSURES that allow manipulating a private class member from the outside,
and any other trendy method of creating anti-pattern practices in OOP; function injection and the like must be
avoided.
On the other side you have stateless apps, which are created to output something and then die; the best
examples are web services, UNIX scripts, text parsers, PHP backends, Python backends, etc. For this kind
of environment I will always try the simplest code with a minimal approach that is also easy to read, and
the best fit is a functional/procedural approach. The problem with OOP here is that most of the time you
will be creating objects that in the end get destroyed just to output JSON or HTML.
Most of the time OOP will involve an ORM to manipulate a database; for the web this is overkill. You don't
need these layers of abstraction, you don't need to abstract the database, you don't need to create an
object representing a row in the database: writing your own optimized SQL and using arrays will be better
in every way.
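The no-ORM approach can be sketched with Python's built-in sqlite3 module (the users table is a hypothetical stand-in for a real schema): hand-written SQL, with rows coming back as plain tuples instead of row objects.

```python
import sqlite3

# In-memory database standing in for the real one (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO users (name) VALUES (?)",
                [("ana",), ("bob",)])

# No ORM, no mapped objects: your own SQL, results as plain tuples.
rows = con.execute("SELECT id, name FROM users ORDER BY id").fetchall()
assert rows == [(1, "ana"), (2, "bob")]
```

There is no object graph to build and tear down per request; the query result is already in the shape you would serialize to JSON or HTML.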
So in the end, I am against over-architecture, bloated code, and the current way of doing things.
Full-stack developers? They are needed because they use many (third-party) tools that they
barely know how to use or maintain; at some point the final product will become abandonware if a small
part of their stack is deprecated.
Developers talking about DEPENDENCIES? They care so much about them because a hello-world app will have many
of them, and they don't even know why.
Think about any software out there: in the 90s you could fit big apps on a floppy disk of less than
1 MB, and now they eat gigabytes like there's no tomorrow. Developers used to have to code in a very optimal way
to fit the app in memory; now they don't care. Why? Because we all have supercomputers compared to what
we had years ago.
So for a guy like me, looking at current "tech" is like, WTF, what are they smoking? I have the same
feeling about the current way of living; I am not even that old, but today's world is crazy.
Just take a look at an average "smartphone" out there, with a 2 GHz CPU and 4 cores: how can it be
so slow? The problem is not the hardware; the problem is the software. It is getting worse every day, and
they don't even care; they just fill the OS with crapware and bloatware and smile.
Whoa, this was fun!!!
https://thenewstack.io/why-are-so-many- ... ogramming/