OOP Criticism
category: general [glöplog]
That was an answer to blala :) Not a very serious one, though.
Tea, earl grey, hot. That's my kind of programming.
Also being able to reroute the nacelles through a small kitten, reverse its polarity and then visualise the result on a fucking mega screen (or holodeck) using just an array of pink and yellow buttons on a touch screen ... the ultimate demo tool I say.
OOP == Platonic philosophy.
CP: well when coding i think more than i type, so autocomplete is not an issue. Also all the haskell libraries out there can be searched by polymorphic type signatures, though i don't use that either. And the haskell ide projects (which again i don't use) have some form of autocompletion anyway.
object oriented INTERCAL is the only true form of object oriented programming.
Quote:
Hm. Is grouping data and functions (objects) the main idea of OOP? I can group things in functions (which I do).
Many would probably say it's not the main point, but it is an aspect of object orientation. If you have some data that's logically connected in your program, you will likely arrange it in a structure. If you have some functions that manipulate the data, you will most often end up passing a pointer to the data as an argument:
invert( &myMatrix );
The equivalent in OOP:
myMatrix.invert();
The difference being that the function is now called a "method", the data structure definition is called a "class", and an instance of a class is called an "object". Also, the method is declared inside of the class definition and the calling syntax reflects this association of code and data.
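To make the comparison concrete, here's a minimal sketch (the 2x2 matrix type is invented purely for illustration); both calls end up doing exactly the same thing:

```cpp
#include <cassert>

// A made-up 2x2 matrix type, just to show the two calling styles.
struct Matrix2 {
    double a, b, c, d;

    // OOP style: the function is a method, declared inside the class.
    void invert() {
        double det = a * d - b * c;
        double na =  d / det, nb = -b / det;
        double nc = -c / det, nd =  a / det;
        a = na; b = nb; c = nc; d = nd;
    }
};

// Procedural style: a free function taking a pointer to the data.
void invert( Matrix2 *m ) {
    m->invert();   // same code; only the calling syntax differs
}
```

So `invert( &myMatrix );` and `myMatrix.invert();` are two spellings of the same operation on the same data.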
As an example, standard C file handles behave a lot like objects. fopen() gives you a handle, and fread(), fwrite() etc. require you to pass that handle to each function so the library knows what the context is.
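A sketch of that pattern with the actual stdio calls (the file name and function are invented):

```cpp
#include <cstdio>

// fopen() hands you a FILE* -- the "object". Every later call takes
// that handle as an argument, which is exactly what a hidden "this"
// pointer does for you in an OOP language.
int writeGreeting() {
    FILE *f = fopen( "demo.txt", "w" );   // ~ constructor
    if( !f ) return -1;
    fputs( "hello", f );                  // ~ method call on f
    fclose( f );                          // ~ destructor
    return 0;
}
```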
You're probably initialising your structures, too. In OOP that would be done in a dedicated method (constructor) which is implicitly called whenever an object is instanced. When the object falls out of scope or is otherwise released, another dedicated method (destructor) is automatically called so the object in turn can release any memory or file handles or other objects it is referencing. The benefit is neater code and you worrying less about allocating and freeing up resources. But it's not a completely new way of thinking about code.
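A minimal sketch of the constructor/destructor idea, wrapping the same C file handle (the class and names are invented):

```cpp
#include <cstdio>

// Constructor acquires the resource, destructor releases it.
class File {
    FILE *f;
public:
    File( const char *path, const char *mode ) : f( fopen( path, mode ) ) {}
    ~File() { if( f ) fclose( f ); }   // runs automatically on scope exit
    bool ok() const { return f != nullptr; }
    void write( const char *s ) { if( f ) fputs( s, f ); }
};

// No fclose() anywhere: the destructor handles it, even on early return.
bool writeLog() {
    File log( "log.txt", "w" );
    if( !log.ok() ) return false;
    log.write( "started\n" );
    return true;
}
```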
And it goes on like that. Most of what OOP is you're probably already doing, but in your own way. Native support for it in the language just makes your job easier and makes it easier to collaborate with others, especially people who weren't in on the project from the beginning. In turn it's easier for you to incorporate someone else's code into your work.
Of course you can extend the methodology into a whole coding philosophy and get all worked up about it, but it's valid just to think of it as standardisation. Like the "data oriented programming" that the GD article discusses: language support would be great, but everyone is already doing it anyway (to varying extents and in their own way).
Quote:
I would have functions for a database in one file, functions for working with user input in the other, etc. It is very convenient and I do not tie those things up in hierarchy.
Except you probably do, implicitly. Or you might more and more as your projects grow. Imagine if you want to work on two copies of the same database. Do you copy the file with the database functionality and suffix all symbol names with "_2"? How much easier would that be if all the variables relating to the database were declared inside a context like a structure, which you could then simply instance twice? If you're doing that already, you're already doing OOP, and you might as well look at what language support there is.
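A sketch of that "instance it twice" point (the Database struct is invented for illustration):

```cpp
#include <string>

// All the state for one database lives in one struct, so a second
// database is just a second instance -- no "_2" symbol suffixes.
struct Database {
    std::string path;
    bool isOpen = false;

    void open( const std::string &p ) { path = p; isOpen = true; }
    void close() { isOpen = false; }
};
```

Two databases are then `Database users, logs;` and the same functions work on both.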
Now, apart from neater, more standardised code, full-on OOP does let you do things that are very hard otherwise. Advanced resource management, smart pointers, reference counting, runtime debugging, and so on. These are all a LOT easier in an OOP language.
Not to sound born-again or anything. OOP isn't the best paradigm when you have "lots" of something (vertices, particles, characters, etc.), as explained in the GD article. But you can still build an efficient object-oriented framework inside which huge datasets are treated in a more "data oriented" way.
Quote:
i don't code my stuff with pauses of a year.
also, comments.
comments are crap. they don't help anything, except maybe for the "big picture" of an application and its modules.
if you follow the rule to write and structure your code in a way that you can read it top-down just like a book, then you can grasp what it does a lot faster than by reading (most probably outdated and too few) comments.
1337 !
yeah, and having other people read your code is just as craaaap anyway.
Plus, you must not only be sure your code is OO-free and commentless, but also implement as many anti-patterns as you can fill it with. Also, all stack-based processes with return-to-function markers should be deprecated to benefit from a correct Spaghetti Code architecture.
Louigi, what is your own definition of OOP?
I'm not sure if I agree about comments being crap. Remember that function, class and variable names are also a form of comment. And just like other comments, they can become inaccurate and out of date if you don't bother updating them. The only way you can have completely commentless code is to use var000, var001, fnc000, fnc001 names, like a disassembler.
I would say that comments are useful in cases where you are specifying the behaviour of a function/class/method, or the invariant properties of a variable, from the perspective of their users, so that you can use those items without having to read the code (and in some cases, reading the code is impossible). The good thing about these kinds of declarations is that they say, "hey, this is the expected behaviour of this class/function/variable, so if you change the code such that these comments aren't true, you also have to update all of the code that uses this item." So, the danger isn't just that the comment becomes out of date. If the comment is out of date, merely changing the comment doesn't fix anything, it just indicates a wider problem.
It also helps you to figure out which side of the wall a bug is on. If your program has a bug in it, is the bug in a particular module, or is the module behaving correctly and the external user is using it wrong? This is determined by the comments: if your module is consistent with the comments, the bug is not in there. It must be that it is being used incorrectly. This is especially important if there is more than one user of the same module (and especially when the users of these modules are spread across the world and don't know of each other).
Of course, this requires comments to be formalized and to exist in predictable locations. This is different from just littering the code with comments, line by line, for no reason.
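For example, a contract-style comment over a function (this example is invented) specifies behaviour from the caller's side; if the code drifts away from it, every caller is suspect, not just the comment:

```cpp
// Contract: returns the index of 'key' in 'sorted' (ascending, no
// duplicates), or -1 if absent. 'sorted' must already be sorted;
// behaviour is undefined otherwise.
int findSorted( const int *sorted, int count, int key ) {
    int lo = 0, hi = count - 1;
    while( lo <= hi ) {
        int mid = lo + ( hi - lo ) / 2;
        if( sorted[mid] == key ) return mid;
        if( sorted[mid] < key ) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}
```

A caller passing an unsorted array is on the wrong side of the wall, and the comment is what tells you so.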
there's a simple rule of thumb for comments: to add potentially useful information that isn't already implied by the (local scope) code, or even simpler: anything helpful that isn't obvious
then again, just like code, comments need maintenance too.
Yes, there are also a large number of things you can do to reduce the amount of comments.
For example, if the comment that declares the behaviour of a function has to warn about some strange side effect of the function, just put the extra effort into eliminating that side effect or making it safe to ignore. If the comments have to detail at length what has to be initialized before the function works, arrange things into classes and use RIAA to make sure it's impossible for those things to be uninitialized when the function is called.
One thing I do that many people find surprising is that I use typedefs for the sole purpose of clarifying the semantic meaning of a variable. To illustrate, a completely beginner programmer might write:
Code:
int DotX; // x position on the screen of the dot
int DotY; // y position on the screen of the dot
To reduce the amount of comments, you can do:
Code:
typedef int SCRCOO; // A coordinate on the screen
typedef SCRCOO SCRCOOX; // An x-coordinate on the screen
typedef SCRCOO SCRCOOY; // A y-coordinate on the screen
SCRCOOX DotX;
SCRCOOY DotY;
Obviously, the savings increase as you have more and more screen coordinates in your program. This is even without the obvious step of making a SCRPOS struct.
If you use classes instead of typedefs, you can get additional benefits, like making sure you don't pass the wrong type of int into a function that is expecting a screen x-coordinate. Having a SCRPOS class, derived from a POS template class, has even further benefits that should be obvious. Even if a SCRPOS differs from POS only by its semantic meaning, it prevents the erroneous use of positions in other coordinate spaces (such as relative to a window within the screen) in places where screen positions are expected. They must be explicitly converted. The declarations also implicitly specify that it refers to a position on the screen, and not in some other coordinate space.
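A minimal sketch of that idea with two invented coordinate types; the compiler then rejects accidental mixing, and conversions between coordinate spaces have to be spelled out:

```cpp
// Hypothetical strong typedefs: identical layout, but distinct types,
// so the compiler refuses to mix them up.
struct ScreenX { int v; explicit ScreenX( int x ) : v( x ) {} };
struct WindowX { int v; explicit WindowX( int x ) : v( x ) {} };

// Only accepts screen coordinates; passing a WindowX is a compile error.
int clampToScreen( ScreenX x, int width ) {
    return x.v < 0 ? 0 : ( x.v >= width ? width - 1 : x.v );
}

// The coordinate-space change is explicit and visible at the call site.
ScreenX toScreen( WindowX wx, int windowLeft ) {
    return ScreenX( wx.v + windowLeft );
}
```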
"lol"... not RIAA, but RAII.
Things to put in your comments:
Why, not how.
Why you chose an O(n²) algorithm, and why that was OK.
Why it's important to lock here, and not afterwards.
Why it's a recursive descent parser.
Things like that.
also avoid TODO comments, since they are very difficult to get rid of
I also like comments as a way of separating distinct pieces of code.
With the comments highlighted you get code that's easy to find your way around. And I don't see anything wrong with actually explaining what you're doing in the comments, either, using them like headers or short summaries, even where it's plainly obvious what the code is doing. It's not that much extra typing.
Code:
// Zero out all global variables
memset( &G, 0, sizeof( G ) );
// Switch to low fragmentation heap (won't work in debugger)
ULONG setLFHeap = 2;
G.lowFragHeap = HeapSetInformation( (void *) GetProcessHeap(),
HeapCompatibilityInformation,
&setLFHeap,
sizeof( setLFHeap ) );
// Initialise globals
G.serialCounter = new SerialCounter();
G.debug = new Debug();
G.error = new Error();
// Global transform table (maths.h)
g_initTransformTable();
// Setup some allocators
G.mallocator = new Mallocator();
G.gfxMallocator = new Mallocator();
G.obContact = new ObjectBlock <Contact> ();
G.obContact->overrideObjectsPerBlock( 16384 );
G.obShape = new ObjectBlock <Shape> ();
G.obRigidBody = new ObjectBlock <RigidBody> ();
G.obSpatialHashEdge = new ObjectBlock <SpatialHashEdge> ();
G.obSpatialHashEdge->overrideObjectsPerBlock( 65536 );
G.obIsland = new ObjectBlock <Island> ();
And my code has lots of TODO comments. I find it useful to know when I'm going off in a different direction and leaving something that should be seen to eventually. The only way I can see they'd be difficult to get rid of is if the code isn't easy to revisit, which shouldn't be the case anyway.
To avoid those kinds of "announce what's coming up in the next block" comments, a lot of people would recommend putting each of those blocks in their own function. That way your code would be:
Code:
ZeroOutGlobals ();
SwitchToLowFragmentationHeap ();
InitialiseGlobals ();
g_initTransformTable ();
SetUpAllocators ();
WhateverTheOtherThingsHaveInCommon ();
The functions would be defined below that function, probably as private functions. If you, at some point in time, happen to be interested in the details of one of those functions, you can just go to that function definition (in some IDEs, right-click on the name and select "go to definition").
In any case, I'm sure a lot of these explicit "new" and "delete" could be removed by using constructors and destructors, using RAII.
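For instance (types invented, sketch only), owning smart-pointer members make the explicit delete calls disappear entirely:

```cpp
#include <memory>

// Stand-ins for the real classes.
struct Debug {};
struct Error {};

// Each member owns its object; when Globals is destroyed the members
// free themselves, in reverse declaration order. No delete anywhere.
struct Globals {
    std::unique_ptr<Debug> debug = std::make_unique<Debug>();
    std::unique_ptr<Error> error = std::make_unique<Error>();
};
```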
I think I already do most of that RAII stuff. Like, closing files in destructors, etc. And I tend to do what that RAII article recommends for plain C. Like in a constructor:
SomeClass *someObject = new SomeClass( blah );
if( !someObject || someObject->didntInitialiseProperly() ) goto fail;
SomeOtherClass *someOtherObject...
...
initialisedProperly = true;
return;
fail:
delete someObject;
delete someOtherObject;
...
initialisedProperly = false;
return;
Maybe it's bad form, but I almost always handle errors explicitly and completely ignore try/throw/catch, just as I like to do memory allocation explicitly with these two magic lines in class definitions:
static inline void* operator new( size_t size );
static inline void operator delete( void *p );
:) But my objects do clean up after themselves when deleted. I even tend to use smart containers (so objects automatically remove themselves from lists when they're deleted and so on).
I'm not quite sure what condition RAII is supposed to cure, but I don't think I have a bad case of it, anyway. I never run into memory/handle leaks and that sort of thing.
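For the record, a sketch of what those two "magic lines" enable: per-class allocation routed through your own code (the live-object counter here is invented for illustration):

```cpp
#include <cstdlib>
#include <cstddef>

struct Particle {
    float x, y;
    static int liveCount;

    // Class-level operator new/delete: every 'new Particle' goes
    // through here instead of the global allocator.
    static void *operator new( size_t size ) {
        ++liveCount;
        return std::malloc( size );
    }
    static void operator delete( void *p ) {
        --liveCount;
        std::free( p );
    }
};
int Particle::liveCount = 0;
```

In a real codebase the malloc/free would be a pooled or block allocator.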
Graga: this is a good question. And my answer would be that at this moment I'm not so sure I can give a definition, after all this analysis and partial confusion. And that's considering that I do know all the basics, and I have written my own classes and used them.
But to try to explain it in simple words, here's what I think it goes down to.
OOP is not one paradigm. It is a set of several data-abstraction mechanisms. The programmer can create classes, which can be composed into a complicated system using inheritance, method visibility and such. An instance of a class is called an object.
Each OOP approach may include all the techniques or exclude some of them.
To me the main visible feature of OOP is a very high level of abstraction and inheritance.
"very high"
doom, you should avoid using direct memory allocations (e.g. new, malloc, calloc, etc.)
in the general case: use smart pointers and put stuff on the heap and let the compiler decide what to do.
concerning the matrix invert function, you've placed it in the matrix class out of convenience, not out of oop-practices. if i were you i'd rather put it as a template function outside the class, i.e. invert<4,4>(matrix);
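A sketch of that suggestion (types invented): inversion as a free template function over the matrix dimensions, with the 2x2 case specialised here for brevity:

```cpp
#include <cstddef>

template <size_t R, size_t C>
struct Mat {
    float m[R][C];
};

// General template left undefined; only sizes we know how to
// invert get a specialisation.
template <size_t R, size_t C>
void invert( Mat<R, C> &mat );

template <>
void invert<2, 2>( Mat<2, 2> &mat ) {
    float a = mat.m[0][0], b = mat.m[0][1];
    float c = mat.m[1][0], d = mat.m[1][1];
    float det = a * d - b * c;
    mat.m[0][0] =  d / det;  mat.m[0][1] = -b / det;
    mat.m[1][0] = -c / det;  mat.m[1][1] =  a / det;
}
```

The call then reads `invert<2,2>(matrix);` (or just `invert(matrix);`, since the sizes are deduced).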