363
votes

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

30 Answers

38
votes

Before January 1st 1970, true and false were the other way around...

38
votes

I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in Python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.
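
For what it's worth, here's a contrived illustration (entirely made up) of the kind of subtle bug I mean:

    def total_positive(numbers):
        total = 0
        for n in numbers:
            if n > 0:
                total += n
            return total  # accidentally indented into the loop body:
                          # the function returns on the first iteration

    print(total_positive([1, 2, 3]))  # prints 1, not the intended 6

Dedent the return by one level and the bug disappears; nothing but invisible whitespace marks the difference.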

37
votes

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate a failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig into the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#); instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
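
To make the idea concrete, here is a rough hand-rolled analogue of an option type, sketched in Python rather than F# (the find_user function and its data are invented for illustration):

    from dataclasses import dataclass
    from typing import Generic, TypeVar, Union

    T = TypeVar("T")

    @dataclass
    class Some(Generic[T]):
        value: T

    @dataclass
    class Nothing:
        pass

    Option = Union[Some[T], Nothing]

    def find_user(user_id: int) -> "Option[str]":
        # The signature itself advertises that this lookup can fail.
        users = {1: "alice", 2: "bob"}
        return Some(users[user_id]) if user_id in users else Nothing()

    result = find_user(3)
    if isinstance(result, Some):
        print(result.value)
    else:
        print("no such user")  # the failure case can't be silently ignored

The caller is forced to acknowledge the Nothing case up front, instead of discovering a NullReferenceException at runtime.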

37
votes

You don't have to program everything

I'm getting tired of the assumption that everything, absolutely everything, needs to be stuffed into a program, as if that is always faster; that everything needs to be web-based, and everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.

35
votes

Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (e.g. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
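
A minimal sketch of what I mean, in Python (the MessagePump class and its methods are invented for illustration):

    class MessagePump:
        # Process-wide message pump; the uniqueness is the whole point.
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._handlers = []
            return cls._instance

        def subscribe(self, handler):
            self._handlers.append(handler)

        def send(self, message):
            for handler in self._handlers:
                handler(message)

    # Any module reaches the same pump without it being passed around:
    MessagePump().subscribe(print)
    MessagePump().send("worker 3 finished")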

35
votes

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.

34
votes

A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue, and it is a pathetic excuse that lets many a lazy manager who did not want to read carefully created reports and documentation say, "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.

33
votes

You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.
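
A hypothetical example of the disease, in Python (entirely made up):

    class Integer:
        # An "object-oriented" wrapper that adds nothing over int.
        def __init__(self, value: int):
            self.value = value

        def add(self, other: "Integer") -> "Integer":
            return Integer(self.value + other.value)

        def to_int(self) -> int:
            return self.value

    # Three objects and two method calls to compute what 2 + 3 already does:
    total = Integer(2).add(Integer(3)).to_int()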

33
votes

Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad; removing code is good. And if you have to write code, write small code..."

33
votes

There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.

32
votes

It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.
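
To sketch what I mean in Python (names invented): keep the potential hotspot behind a small, stable interface, so the implementation can be swapped later without touching callers.

    def find_nearest(points, query):
        # Naive linear scan -- O(n), perfectly fine for now. Callers only
        # depend on this signature; if profiling later shows this is a
        # bottleneck, the body can become a k-d tree or a grid index
        # without tearing the API apart.
        return min(points, key=lambda p: (p[0] - query[0]) ** 2
                                       + (p[1] - query[1]) ** 2)

    nearest = find_nearest([(0, 0), (3, 4), (1, 1)], (2, 2))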

Hugo

31
votes

C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.
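
For anyone who hasn't used RAII: the point is that cleanup is bound to scope automatically. A sketch of the contrast, in Python only for brevity (in C++ the destructor does this with no special syntax at all):

    # Without deterministic cleanup you must remember the finally block:
    f = open("data.txt")
    try:
        print(f.read())
    finally:
        f.close()

    # A context manager ties cleanup to scope -- the explicit, opt-in
    # equivalent of what a C++ destructor does implicitly for every object:
    with open("data.txt") as f:
        print(f.read())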

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each other's functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.

31
votes

Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc. in a particular order within those files. Why can't we just throw all those things into a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands, and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?

31
votes

If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.

30
votes

One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not a fanatical mindfulness, of "where" the data is at any given moment.

30
votes

Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.
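
A small illustration of the point in Python rather than Scheme (names invented): in a language with first-class functions, the entire Strategy pattern collapses into an ordinary argument.

    # In a less expressive language, each of these would be a Strategy
    # class implementing a one-method interface, wired up with factories.
    def by_price(item):
        return item["price"]

    def by_name(item):
        return item["name"]

    items = [{"name": "widget", "price": 9}, {"name": "gadget", "price": 4}]

    # The "pattern" is just a parameter:
    cheapest_first = sorted(items, key=by_price)
    alphabetical = sorted(items, key=by_name)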

28
votes

Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.

26
votes

I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. E.g., consider the importance placed on code reviews when you haven't even reviewed your design! It's madness.

25
votes

The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."

25
votes

Emacs is better

25
votes

1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer science - they do not reflect reality, and computer programs should reflect reality as much as possible.
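
To make the complaint concrete, a small Python example of that plus-one-minus-one hell (illustrative only):

    employees = ["Alice", "Bob", "Carol"]

    # 0-based: "the 3rd employee" lives at index 2, and the classic
    # off-by-one bug is reaching for employees[3].
    third = employees[2]

    # Counting "from 1 to n" the way humans do requires an adjustment
    # at every boundary in a 0-based language:
    for i in range(1, len(employees) + 1):
        print(i, employees[i - 1])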

25
votes

Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.

24
votes

The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous: the only way to get quality is to employ quality developers and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc. only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code), but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.

23
votes

Opinion: Frameworks and third-party components should be used only as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.

23
votes

Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or what files they're in, for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.

23
votes

To produce great software, you need domain specialists as much as good developers.

22
votes

The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. That route implies there will be times when the product owner asks for something that he/she feels is the next revolution in computing, leaving me to either agree or to explain the more likely reason why no one does it that way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since communication has improved, with disagreements aired on both sides of the table, it is more likely that we come together earlier in the process and arrive at a mutually beneficial solution to the product definition.

22
votes

If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through source code or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?


22
votes

My most controversial programming opinion is that finding performance problems is not about measuring, it is about capturing.

If you're hunting for elephants in a room (as opposed to mice) do you need to know how big they are? NO! All you have to do is look. Their very bigness is what makes them easy to find! It isn't necessary to measure them first.

The idea of measurement has been common wisdom at least since the paper on gprof (Susan L. Graham, et al 1982)*, when all along, right under our noses, has been a very simple and direct way to find code worth optimizing.

As a small example, here's how it works. Suppose you take 5 random-time samples of the call stack, and you happen to see a particular instruction on 3 out of 5 samples. What does that tell you?

.............   .............   .............   .............   .............
.............   .............   .............   .............   .............
Foo: call Bar   .............   .............   Foo: call Bar   .............
.............   Foo: call Bar   .............   .............   .............
.............   .............   .............   Foo: call Bar   .............
.............   .............   .............   .............   .............
                .............                                   .............

It tells you the program is spending 60% of its time doing work requested by that instruction. Removing it removes that 60%:

...\...../...   ...\...../...   .............   ...\...../...   .............
....\.../....   ....\.../....   .............   ....\.../....   .............
Foo: \a/l Bar   .....\./.....   .............   Foo: \a/l Bar   .............
......X......   Foo: cXll Bar   .............   ......X......   .............
...../.\.....   ...../.\.....   .............   Foo: /a\l Bar   .............
..../...\....   ..../...\....   .............   ..../...\....   .............
   /     \      .../.....\...                      /     \      .............

Roughly.

If you can remove the instruction (or invoke it a lot less), that's roughly a 2.5x speedup: the remaining 40% of the runtime implies a factor of 1/0.4 = 2.5. (Notice - recursion is irrelevant - if the elephant's pregnant, it's not any smaller.) Then you can repeat the process, until you truly approach an optimum.

  • This did not require accuracy of measurement, function timing, call counting, graphs, hundreds of samples, any of that typical profiling stuff.

Some people use this whenever they have a performance problem, and don't understand what the big deal is.

Most people have never heard of it, and when they do hear of it, think it is just an inferior mode of sampling. But it is very different, because it pinpoints problems by giving the cost of call sites (as well as terminal instructions), as a percentage of wall-clock time. Most profilers (not all), whether they use sampling or instrumentation, do not do that. Instead they give a variety of summary measurements that are, at best, clues to the possible location of problems. Here is a more extensive summary of the differences.

*In fact that paper claimed that the purpose of gprof was to "help the user evaluate alternative implementations of abstractions". It did not claim to help the user locate the code needing an alternative implementation, at a finer level than functions.
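
For the curious, here is a toy sketch of the random-pause idea in Python (illustrative only; sys._current_frames is a CPython-specific introspection hook, and busy_worker is invented):

    import random
    import sys
    import threading
    import time
    import traceback
    from collections import Counter

    def busy_worker():
        while True:
            sum(i * i for i in range(100_000))  # the "elephant"

    worker = threading.Thread(target=busy_worker, daemon=True)
    worker.start()

    # Take a handful of random-time samples of the worker's call stack.
    hits = Counter()
    samples = 5
    for _ in range(samples):
        time.sleep(random.uniform(0.05, 0.15))  # the random pause
        frame = sys._current_frames()[worker.ident]
        for entry in traceback.extract_stack(frame):
            hits[(entry.name, entry.lineno)] += 1

    # Any call site that shows up on most samples is where the time goes.
    for (name, lineno), count in hits.most_common(3):
        print(f"{name}:{lineno} appeared on {count}/{samples} samples")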


My second most controversial opinion is this, or it might be if it weren't so hard to understand.