Why You Should Use OpenGL And Not DirectX


Often, when we meet other game developers and say that we use OpenGL for our game Overgrowth, we're met with stares of disbelief -- why would anyone use OpenGL? DirectX is the future. When we tell graphics card representatives that we use OpenGL, the temperature of the room drops by ten degrees.
This baffles us. It's common geek wisdom that standards-based websites, for instance, trounce Silverlight, Flash, or ActiveX. Cross-platform development is laudable and smart. No self-respecting geek enjoys dealing with closed-standard Word documents or Exchange servers. What kind of bizarro world is this where engineers are not only going crazy over Microsoft's latest proprietary API, but actively denouncing its open-standard competitor?
Before we dive into the story of why we support OpenGL, let's start with a bit of history:

What is OpenGL?

OpenGL

In 1982, Silicon Graphics started selling high-performance graphics terminals using a proprietary API called Iris GL (GL is short for "graphics library"). Over the years, Iris GL grew bloated and hard to maintain, until Silicon Graphics took a radical new step: they completely refactored Iris GL and made it an open standard. Their competitors could use the new Open Graphics Library (OpenGL), but in return, they had to help maintain it and keep it up to date.
Today, OpenGL is managed by the Khronos Group -- a non-profit organization with representatives from many companies that are interested in maintaining high-quality media APIs. At a lower level, it's managed by the OpenGL Architecture Review Board (ARB). OpenGL is supported on every gaming platform, including Mac, Windows, Linux, PS3 (as a GCM wrapper), Wii, iPhone, PSP, and DS. Well, every gaming platform except for the XBox -- which brings us to our next topic:

What is DirectX?


Ever since MS-DOS, Microsoft has understood that games play an important role in users' choice of operating systems. For this reason, in 1995, they created a proprietary set of libraries in order to encourage exclusive games for their new Windows 95 operating system. These libraries included Direct3D, DirectInput and DirectSound, and the entire collection came to be known as DirectX. When Microsoft entered the gaming market in 2001, it introduced the DirectX Box, or XBox for short. The XBox was a loss leader (losing over 4 billion dollars), intended to set the stage to dominate the games market in the next generation.
Looking at the games scene now, it's clear that this strategy is succeeding. Most major PC games now use DirectX, and run on both Windows and XBox 360. With few exceptions, they don't work on competing platforms, such as Playstation, Mac OS, and Wii. These are significant markets to leave out, bringing us to the big question:

Why does everyone use DirectX?

Everyone uses DirectX because API choice in game development is a positive feedback loop, and it was shifted in favor of DirectX in 2005.
It's a positive feedback loop because whenever one API becomes more popular, it keeps becoming more and more popular due to network effects. The most important network effects are as follows: the more popular API gets better support from graphics card vendors, and graphics programmers are more likely to already know how to use it.
API use was shifted in favor of DirectX by Microsoft's two-pronged DirectX campaign around the launch of the XBox 360 and Windows Vista, including the spread of FUD (fear, uncertainty and doubt) about the future of OpenGL, and wild exaggeration of the merits of DirectX. Ever since then, the network effects have amplified this discrepancy until OpenGL has almost disappeared entirely from mainstream PC gaming.

1. Network effects and vicious cycles


On Windows, it's a fact that the DirectX graphics drivers are better maintained than the OpenGL graphics drivers. This is caused by the vicious cycle of vendor support. As game developers are driven from OpenGL to DirectX by other factors, the graphics card manufacturers (vendors) get fewer bug reports for their OpenGL drivers, extensions and documentation. This results in shakier OpenGL drivers, leading even more game developers to switch from OpenGL to DirectX. The cycle repeats.
Similarly, it's a fact that more gaming graphics programmers know how to use DirectX than OpenGL, so it's cheaper (less training required) to make a game using DirectX than OpenGL. This is the result of another vicious cycle: as more game projects use DirectX, more programmers have to learn how to use it. As more programmers learn to use it, it becomes cheaper for game projects to use DirectX than to use OpenGL.

2. FUD about OpenGL and Vista


Microsoft initiated a fear, uncertainty, and doubt (FUD) campaign against OpenGL around the release of Windows Vista. In 2003, Microsoft left the OpenGL Architecture Review Board -- showing that they no longer had any interest in the future of OpenGL. Then in 2005, they gave presentations at SIGGRAPH (special interest group for graphics) and WinHEC (Windows Hardware Engineering Conference) that gave the impression that Windows Vista would remove support for OpenGL except to maintain back-compatibility with XP applications. This version of OpenGL would be layered on top of DirectX as shown here (from the WinHEC presentation), causing a dramatic performance hit. This campaign led to panic in the OpenGL community, leading many professional graphics programmers to switch to DirectX.
When Vista was released, Microsoft backpedaled on its OpenGL claims, allowing vendors to create fast installable client drivers (ICDs) that restore native OpenGL support. The OpenGL board sent out newsletters showing that OpenGL was still a first-class citizen, and that OpenGL performance on Vista was still at least as fast as Direct3D. Unfortunately for OpenGL, the damage had already been done -- public confidence in OpenGL was badly shaken.

3. Misleading marketing campaigns


The launches of Windows Vista and Windows 7 were both accompanied by an immense marketing push by Microsoft for DirectX, in which they showed 'before' and 'after' screenshots of the different DirectX versions. Many gamers now think that switching from DirectX 9 to DirectX 10 magically transforms graphics from stupidly dark to normal (as in Microsoft's before/after comparisons), or from Halo 1 to Crysis. Game journalists proved that there was no difference between Crysis DX9 and DX10, and that its "DX10" features worked fine with DX9 by tweaking a config file. However, despite its obvious inaccuracy, the marketing has convinced many gamers that DirectX updates are the only way to access the latest graphics features.
While many games participate in Microsoft's marketing charade, more savvy graphics programmers like John Carmack refuse to be swept up in it. He put it this way, "Personally, I wouldn’t jump at something like DX10 right now. I would let things settle out a little bit and wait until there’s a really strong need for it."

So why do we use OpenGL?

Given that OpenGL has less vendor support, is no longer used in games, is being actively attacked by Microsoft, and has no marketing momentum, why should we still use it? Wouldn't it be more profitable to ditch it and use DirectX like everyone else? No, because in reality, OpenGL is more powerful than DirectX, supports more platforms, and is essential for the future of games.

1. OpenGL is more powerful than DirectX


It's common knowledge that OpenGL has faster draw calls than DirectX (see NVIDIA presentations like this one if you don't want to take my word for it), and it has first access to new GPU features via vendor extensions. OpenGL gives you direct access to all new graphics features on all platforms, while DirectX only provides occasional snapshots of them on its newest versions of Windows. The tessellation technology that Microsoft is heavily promoting for DirectX 11 has been an OpenGL extension for three years. It was even possible for years before that, using fast instancing and vertex-texture-fetch. While I don't know which new technologies will be exposed in the next couple of years, I do know they will be available first in OpenGL.
Microsoft has worked hard on DirectX 10 and 11, and they're now about as fast as OpenGL, and support almost as many features. However, there's one big problem: they don't work on Windows XP! Half of PC gamers still use XP, so using DirectX 10 or 11 is not really a viable option. If you really care about having the best possible graphics, and delivering them to as many gamers as possible, there's no choice but OpenGL.

2. OpenGL is cross-platform


More than half of our Lugaru users use Mac or Linux (as shown in this blog post), and we wouldn't be surprised if the same will be true of our new game Overgrowth. When we talk to major game developers, we hear that supporting Mac and Linux is a waste of time. However, I've never seen any evidence for this claim. Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
As John Carmack said when asked if Rage was a DirectX game, "It’s still OpenGL, although we obviously use a D3D-ish API [on the Xbox 360], and CG on the PS3. It’s interesting how little of the technology cares what API you’re using and what generation of the technology you’re on. You’ve got a small handful of files that care about what API they’re on, and millions of lines of code that are agnostic to the platform that they’re on." If you can hit every platform using OpenGL, why shoot yourself in the foot by relying on DirectX?
Even if all you care about is Windows, let me remind you again that half of Windows users still use Windows XP, and will be unable to play your game if you use the latest versions of DirectX. The only way to deliver the latest graphics to Windows XP gamers (the single biggest desktop gaming platform) is through OpenGL.

3. OpenGL is better for the future of games


OpenGL is a non-profit open standard created to allow users on any platform to experience the highest quality graphics that their hardware can provide. Its use is being crushed by a monopolistic attack from a monolithic corporate giant trying to dominate an industry that is too young to protect itself. As Direct3D becomes the only gaming graphics API supported on Windows, Microsoft is gaining a stranglehold on PC gaming.
We need competition and freedom to drive down prices and drive up quality. A Microsoft monopoly on gaming would be very bad for both gamers and game developers.

Can OpenGL recover?

Back in 1997, the situation was similar to how it is now. Microsoft was running a massive marketing campaign for Direct3D, and soon everyone "just knew" that it was faster and better than OpenGL. This started to change when Chris Hecker published his open letter denouncing DirectX. Soon after that, John Carmack posted his famous OpenGL rant, and put his money where his mouth was by implementing all of Id Software's games in OpenGL, proving once and for all that DirectX was unnecessary for high-end 3D gaming.
This lesson appears to have been forgotten over the last few years. Most game developers have fallen under the spell of DirectX marketing, or into the whirlpool of vicious cycles and network advantages. It's time to throw off the veil of advertisements and buzzwords, and see what's really happening. If you use DirectX, you have to choose between using the weak, bloated DirectX 9 or sacrificing most of your user-base to use DirectX 10 or 11.
On the other hand, if you use OpenGL, you get faster and more powerful graphics features than DirectX 11, and you get them on all versions of Windows, Mac and Linux, as well as the PS3, Wii, PSP, DS, and iPhone. You also get these features in the rapidly-developing WebGL standard, which may become the foundation for the next generation of browser games.
If you're a game developer, all I ask is that you do the research and compare the figures, and decide if OpenGL is a better choice. Some programmers prefer the style of the DirectX 11 API to OpenGL, but you're going to be wrapping these low-level APIs in an abstraction layer anyway, so that shouldn't be a deciding factor. If there's anything about OpenGL that you don't like, then just ask the ARB to change it -- they exist to serve you!
If you're a gamer who uses Windows XP, Mac, or Linux, I hope you can see that DirectX only exists in order to keep new games from reaching your platform, and the only way you can fight back is to support games that use OpenGL.

Taken from: http://linkive.com/home/browser/MTU5NDU0/Comedy/Why%20You%20Should%20Use%20OpenGL%20And%20Not%20DirectX
Superman, a superhero wearing tight pants and sporting a dorky haircut. The ultimate epitome of repressed sexuality in a world of puritanism and double standards, wrapped in the lackluster bubble of unimaginative depression of the early-to-mid 20th century.
But what if we forget Superman's shiny red latex (not LaTeX) and focus on the actual character? If we believe the comic books and the four (or five) rather bad movies, Superman is meant to be our savior. However, what the publishers of the entertainment brochures and the silver screen features did not want you to know is that Superman is one big menace to modern society.

Now, let's see why you should not befriend the nerdy journalist/superhero guy.

Superman is deadly to those he rescues

I can't take any credit for this. In his Unified Theory of Superman (PDF), Ben Tippett smartly notes the fact that Superman does not dip when he catches people plummeting from the sky. Now, if we focus for one moment on the basic laws of physics, including the conservation of momentum, this means that Superman is as soft and inviting to those he catches as the patch of solid asphalt that people would have hit instead of our would-be superhero.
This is what you would look like if Superman caught you, only probably a lot less tasty! And maybe not so neatly arranged on the BBQ grille, hmmm ...
Think about it. You're hurtling down at 300 km/h and then you brake to a miraculous standstill instantly. Your internal organs would suffer irreparable damage due to deceleration exceeding the acceptable human limit of about 25g.
Without Superman performing a very intricate rescue, where he matches the speed of the falling victim and then slowly brakes his vertical descent, the effect on the victim would be the same as slamming head-first into a concrete wall, or even worse, because while concrete is known to crack in some circumstances, Superman is not. What's more, being caught in Superman's arms also prevents you from bouncing off, which could potentially help dissipate some of the impact energy and reduce the damage to your organs. With your body flattened against Superman's, the recoil of the impact would be contained entirely inside your anatomy, amplifying the damage. You would probably undergo resonance of some sort, which would effectively turn your internal organs into soup.
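For the curious, here is a rough back-of-the-envelope estimate with illustrative numbers of my own, assuming the fall speed above and that the victim's body (plus Superman's arms) give by about half a metre during the catch:

$$
v \approx 83\ \mathrm{m/s}, \qquad a = \frac{v^2}{2d} = \frac{(83\ \mathrm{m/s})^2}{2 \times 0.5\ \mathrm{m}} \approx 6900\ \mathrm{m/s^2} \approx 700g
$$

That is nearly thirty times the 25g limit quoted above, so "soup" is not much of an exaggeration.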
There you go, proof #1. Superman is deadly to those he rescues. You definitely do not need Superman to be there when you die. Somehow, the sight of his oily, dorky haircut makes the dying experience all the worse. At least solid concrete has no pretensions about what it is.
Hitting the ground or Superman, it's pretty much the same. Only shiny latex feels more embarrassing.

Superman is deadly to those he ogles nastily with his X-ray perverticles

Perverticles are spectacles worn by perverts. If you're wondering what the exact definition of a pervert is, it's a person who feels ashamed of his/her deeds yet is compelled to do them. Example: someone wearing sunglasses in a restaurant so they can comfortably check out the merchandise without being noticed. Hence the catchy phrase perverticles, which I just coined. Feel free to use it, GPL style.
Clark Kent, Superman's nerdy alter ego, is a guy wearing thick-rimmed 50's era spectacles. Sometimes, he happens to be wearing them when using his X-ray vision. Combined, you get perverticles.
Most people have no idea what X-rays are, so I'll elaborate here a little.
X radiation, which consists of so-called X rays (or X-ray photons), is a form of electromagnetic radiation characterized by a very short wavelength, high frequency and high energy, all of which are equivalent ways of describing the same photon property. To be classified as an X ray, a photon has to have a wavelength of about 0.01 to 10 nanometers, corresponding to an energy of about 100 eV to 150 keV. For reference, visible light has a wavelength of about 500 nanometers and an energy of only about 2.5 eV.

In practical terms, energy means ability to penetrate objects; the more energetic the particle is, the more deeply it can go through objects, including solid mass that is opaque to visible light.
X rays are used widely in many branches of industry, including medical, military and security. For example, you have the X-ray machines in hospitals used to scan patients for fractures, CT machines used for angiography, powerful X-ray scanners used for detecting fatigue and cracks in aircraft frames, and scanners used for screening baggage at airports. All of these rely on feisty photons for their hard work.
However, one thing all these devices have in common is that they also rely on an array of sensors, usually positioned on the other side of the scanned objects, to work. Radiation that is absorbed inside the scanned objects or scattered away is deducted from the input flux, allowing complex computer algorithms to map detailed 2D and 3D imagery of the scanned objects.
Superman does not have any external sensors to help him with his X-ray scans, which means he uses a different method altogether. As to how dangerous it is, we will discover soon.
Superman must use his X-ray vision in highly focused pulses. To be able to see what is actually hidden behind any given layer of scanned objects, he needs to have some of his X-ray photons bounce back and be collected in his X-ray sensitive retinae. Since radiation density is inversely proportional to the square of distance, most of Superman's photons are scattered away before reaching the desired objects, many more are scattered away by the object's geometry, and even those that bounce back at Superman are partially lost traveling back to his retinae.
Without getting too technical, Superman must make a very cunning use of his eyes. First, he probably fires monochromatic pulses, starting with high-energy beams. These photons normally bounce off their target, without getting absorbed. This is similar to the new scanners used at airports, which allow you to see people skin-deep. Next, Superman must use lower-energy X-rays, which will penetrate the scanned target and illuminate what is beyond. But then, they must travel back.
This means many things:
Some radiation is absorbed in the intermediate object, especially the low-energy photons. This is a known risk in CT scanners, where low-energy absorption accounts for most of the damage to the patients. A kind of paradox, really: while the medical industry strives to reduce dosage, going for lower energy settings in the X-ray cannon is sometimes more detrimental to the health of the patient than using higher energies.
The portion of radiation that traverses the scanned object is much reduced in its energy. It then bounces off the intended target and travels back to Superman. The intermediate object experiences yet another bombardment of photons, this time composed of an even broader spectrum of low-energy photons that get absorbed inside it. In the case of the human body, this means ionization damage to cells and DNA.

Superman gets only a partial image of the secondary target, so he must use even more of his vision to obtain more data and form a correct picture. Compared to X-ray scanners, which need to traverse the body only once, Superman's photons need to do that twice, at the very least. In technical terms, this means at least four times more radiation, and possibly much more, because the distances Superman works at and the unpredictable scattering are far less ideal than industry setups.
With scattering and parasitic absorption claiming 99% of all radiation used in medical appliances, Superman's efficiency is probably several orders of magnitude lower, especially if the targets are moving or located in crowded places, with multiple objects and/or humans as random obstacles.
I've done some basic calculations: assuming Superman wanted to see what's in Lois Lane's purse from about three meters away while cozily chatting in a side lane in New York City, with the concrete buildings as lovely scattering dishes, he probably irradiates her each time with an effective dose of about 20 mSv, which is equivalent to at least a full neonatal abdominal CT, or about ten years of average annual background exposure!
Lois Lane, say bye bye to your Fallopian tubes. Superman just fried your ovaries.
Speaking of ovaries ...

Superman is deadly to those he makes love to

A shooting range target after being hit by Superman's adamant DNA particles; Lois Lane stands no chance
Again, I can't take credit. You need to read the perfect masterpiece called Men of Steel, Women of Kleenex, by Larry Niven, in which the author elaborates on Superman's lovemaking fiasco.
Assuming Superman is not a hermaphrodite or completely disinterested in sex, which does not seem to be the case, since he uses his perverticles to ogle Lois and irradiate her every which way, Superman has attempted or will attempt coitus in some way. Which would probably lead to multiple fatalities. Just read the article above.

Superman might cause black holes

With his frivolous misuse of physics, including the rather inane trick of spinning Earth counterclockwise (or whatever) in the first movie, which somehow miraculously reversed time, go figure, Superman is far more likely to cause black holes in the Solar System than the LHC is.
Superman seems to be faster than anything else, which means he can probably attain the speed of light. And with mass growing as the speed-of-light limit is approached, Superman could very well cause cataclysmic disturbances in the fabric of the Universe. Imagine an 85 kg man zooming by at 300,000 km/s, only he no longer weighs 85 kg but infinitely more.
Such a comet would be a living black hole, sucking everything in its wake after it, including the better part of our Solar System. In fact, Superman would probably disrupt the entire Galaxy with any speed-of-light tricks. The reason for this is very simple: as an active, roaming black hole, he would gather so much mass that when he finally decelerated and let this mass escape his exponentially growing Schwarzschild radius, he would have delivered countless trillions of tons of stellar mass at near light-speed to a new corner of the Galaxy, disrupting its mass, angular momentum and whatnot. In fact, even a single instance of Superman going ballistic in the vicinity of our planet would cause such a powerful yet instant gravitational pull that we would all be dead instantly.

Unanswered questions

No one has answers for these, but what happens when Superman sneezes or farts? Do we get a hail of plasma-quality droplets stripping oxygen molecules from the atmosphere, or just a bad case of Kryptonite phlegm? Do we see spectacular cabbage-flavored earthquakes caused by colonic-tectonic rumbles in Superman's digestive system, or do we just get a bit of cabbage-flavored wind?

Conclusion

As you can see, Superman IS a menace to our society. With his X-ray vision, Superman is likely to cause numerous severe cases of cancer and radiation sickness among the unsuspecting populace. And with his uncaring rescues, you're minced meat! Finally, he can probably destroy the entire Universe, by mistake.
Avoid Superman if you can!
Cheers.

Funnier articles on: http://www.dedoimedo.com/physics/superman.html (Keep surfing)
A few weeks ago I had a heated discussion with a bunch of Python and open source people at a local meet-up about the way Python's path joining works. I was always pretty sure that people were aware of how path joining works and why it works that way. However, a bit of searching around on the internet quickly showed that it's actually a pretty common mistake to use the os.path.join function with arbitrary and unfiltered input, leading to security issues. Because the most common case where user input comes from another system is web development, I went a bit further and tried to find a few other cases where people might be blindly trusting an API or operating system.
So here it is: my list of things not to do when doing Python web development.

Untrusted Data and File Systems

Unless you are running on a virtualized filesystem, like when you are executing code on Google App Engine, chances are that vital files can be accessed with the rights your application has. Very few deployments actually reduce the rights of the executing user account to a level where it would become safe to blindly trust user-submitted filenames. Because it typically isn't safe, you have to think about that.
In PHP land this is common knowledge by now because many people write innocent looking code like this:

<?php

include "header.php";
$page = isset($_GET['page']) ? $_GET['page'] : 'index';
$filename = $page . '.php';
if (file_exists($filename))
    include $filename;
else
    include "missing_page.php";
include "footer.php";

Now the problem is that if you accept the filename blindly, one could just pass a string with some leading "go one layer up" markers and access files somewhere else on the file system. Many people thought that wouldn't be a problem because the file has to end with ".php", so only PHP files can be accessed. Turns out that PHP never (at least not until recently) removed null bytes from the string before opening the file, so the underlying C function that opened the file stopped reading at the null byte. So if an attacker accessed the page ?page=../../../../htpasswd%00, he would see the contents of the htpasswd file.
Python programmers apparently don't care too much about this problem, because Python's file opening functions don't have this issue and reading files from the filesystem is a very uncommon thing to do anyway. However, in the few situations where people do work with filenames, you will almost always find code like this:

import os

def upload_file(file):
    # file.filename comes straight from the client and is attacker-controlled.
    destination_file = os.path.join(UPLOAD_FOLDER, file.filename)
    with open(destination_file, 'wb') as f:
        copy_fd(file, f)

The problem there is that you expect os.path.join never to go a folder up. While in fact, that's exactly what os.path.join is capable of doing:

>>> import os
>>> os.path.join('/var/www/uploads', '../foo')
'/var/www/uploads/../foo'
>>> os.path.join('/var/www/uploads', '/foo')
'/foo'

While in this case the attacker is “just” able to overwrite files anywhere on the filesystem where the user has access (and might be able to overwrite your code and inject code that way!), it's not uncommon to read files from the filesystem as well and expose information that way.
So yes, os.path.join is totally unsafe to use in a web context. Various libraries have ways to help you deal with this problem. Werkzeug for instance has a function called secure_filename that will strip any path separators from the filename, including slashes, and even remove non-ASCII characters, since character sets and filesystems are immensely tricky. At the very least you should do this:

import os, re

# Build a character class matching the null byte plus every path
# separator the platform knows about (os.path.altsep is None on POSIX).
_split = re.compile(r'[\0%s]' % re.escape(''.join(
    [os.path.sep, os.path.altsep or ''])))

def secure_filename(path):
    # Drop separators and null bytes entirely instead of trying to escape them.
    return _split.sub('', path)

This will remove any slashes and null bytes from the filename. Why also remove the null byte if Python does not have a problem with it? Because Python might not, but your code might. A null byte in the filename will trigger a TypeError, which very few people are expecting:

>>> open('\0')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: file() argument 1 must be encoded string without NULL bytes, not str
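
For illustration, here is roughly how the helper above behaves on a hostile filename. This is just a sketch of the expected output on a POSIX system, where os.path.sep is '/' and os.path.altsep is None:

>>> secure_filename('../../etc/passwd')
'....etcpasswd'
>>> secure_filename('harmless.txt')
'harmless.txt'

The separators are simply dropped rather than escaped, so whatever remains can no longer climb out of the upload folder.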

On Windows you furthermore have to make sure people are not naming their files after device files, but that is outside of the scope of this post. If you are curious, check how Werkzeug does it.
If you actually do want to allow slashes in the filename there are a couple of things you have to consider. On POSIX the whole system is incredibly easy: if the path starts with a slash or with the combination ../, it will or might try to reference a file outside of the folder you want the file to be in. That's easy to prevent:

import posixpath

def is_secure_path(path):
    # Normalize away redundant separators and "foo/../bar" sequences first,
    # then reject absolute paths and anything that still climbs upwards.
    path = posixpath.normpath(path)
    return not path.startswith(('/', '../'))
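
A quick interactive check of the expected behaviour (a sketch; note that the function deliberately reasons about POSIX-style paths only):

>>> is_secure_path('photos/2010/me.jpg')
True
>>> is_secure_path('../../../etc/passwd')
False
>>> is_secure_path('/etc/passwd')
False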

On Windows the whole situation is trickier (and I fell into that trap a few days ago as well). First of all, you have backslashes to consider. Technically you would also have to consider colons on classic Mac OS, but very few people still aim for Mac OS compatibility, so the backslash is the main culprit. Secondly, you can't just test for absolute paths by checking if the path starts with a slash: on Windows there are multiple kinds of absolute paths, regular Unix-style absolute paths as well as absolute paths that also include a drive letter. Thankfully os.path provides a way to reliably check if a path is absolute.
The following function checks that a path will not manage to escape a folder, on both POSIX and Windows:

import os

# Collect the native separators that are not the forward slash
# (os.path.altsep is None on POSIX and '/' on Windows).
non_slash_seps = [sep for sep in (os.path.sep, os.path.altsep)
                  if sep not in (None, '/')]

def is_in_folder(filename):
    filename = os.path.normpath(filename)
    # A native separator such as the Windows backslash has no business
    # being in a web-supplied filename, so reject it outright.
    for sep in non_slash_seps:
        if sep in filename:
            return False
    # Reject absolute paths and anything that still climbs out of the folder.
    return not os.path.isabs(filename) and not filename.startswith('../')

The idea is that we consider the filenames to be in POSIX notation and that the operating system is fine with filenames containing forward slashes. That is the case for all operating systems you would care about these days. Then, if the native operating system path separator appears in the string, we can assume it's not a valid character for a filename on the web anyway and consider the path unsafe. Once that check has passed, we make sure the path is not absolute and does not start with the special ../ string that indicates going up a level on both Windows and POSIX.
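Again, a quick sanity check of the results you would expect, this time on a POSIX system where the backslash is an ordinary filename character and non_slash_seps therefore ends up empty:

>>> is_in_folder('avatars/tux.png')
True
>>> is_in_folder('../secret.cfg')
False
>>> is_in_folder('/etc/passwd')
False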
Generally speaking though, if you do aim for Windows compatibility you have to be extra careful, because Windows has its special device files in every folder on the filesystem for DOS compatibility. Writing to those might be problematic and could be abused for denial of service attacks.

Mixing up Data with Markup

This is a topic that always makes me cringe inside. I know it's very common and many don't see the issue with it but it's the root of a whole bunch of problems and unmaintainable code. Let's say you have some data. That data for all practical purposes will be a string of some arbitrary maximum length and that string will be of a certain format. Let's say it's prosaic text and we want to preserve newlines but collapse all other whitespace to a single space.
A very common pattern.
However that data is usually displayed on a website in the context of HTML, so someone will surely bring up the great idea to escape the input text and convert newlines to <br> before feeding the data into the database. Don't do this!
There are a bunch of reasons for this, but the most important one is called “context”. Web applications these days are getting more and more complex, mainly due to the concept of APIs. A lot of the functionality of a website that was previously only available through an HTML form is now also available via RESTful interfaces speaking some other format such as JSON.
The context of a rendered text in your web application will most likely be “HTML”. In that context, <br> makes a lot of sense. But what if your transport format is JSON and the client on the other side is not (directly) rendering into HTML? This is the case for Twitter clients, for instance. Yet someone at Twitter decided that the string with the application name that is attached to each tweet should be in HTML. When I wrote my first JavaScript client for that API I was parsing that HTML with jQuery and fetching the application name as a string, because that was all I was interested in. Annoying. Even worse: someone found out a while later that this particular field could actually be used to emit arbitrary HTML. A major security disaster.
The other problem is when you have to reverse the process again: if you want to be able to edit that text, you have to unescape it, reproduce the original newlines, and so on.
So there should be a very, very simple rule (and it's actually really simple): store the data as it comes in. Don't flip a single bit! (The only acceptable conversion before storing stuff in the database might be Unicode normalization)
When you have to display your stored information: provide a function that does that for you. If you fear that this could become a bottleneck: memcache it or have a second column in your database with the rendered information if you absolutely must. But never, ever let the HTML formatted version be the only thing you have in your database. And certainly never expose HTML strings over your API if all you want to do is to transmit text.
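As a minimal sketch of what such a display-time function could look like (render_text is a made-up name for illustration; cgi.escape is the standard library helper of the Python 2 era, html.escape being its Python 3 successor):

import cgi  # on Python 3 you would use html.escape instead

def render_text(raw):
    # The database holds the raw text untouched; escaping for the
    # HTML context happens only here, at display time.
    escaped = cgi.escape(raw)
    # Preserve stored newlines by turning them into <br> now,
    # not before the data went into the database.
    return escaped.replace('\n', '<br>\n')

A JSON API, by contrast, would simply serialize the raw string and leave any escaping to whatever client ends up rendering it.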
Every time I get a notification on my mobile phone from a certain notification service where the message contains an umlaut, the information arrives here completely broken. It turns out that one service assumes HTML-escaped information is to be transmitted, but the other service only allows a few HTML-escaped characters and completely freaks out when you substitute “ä” with “&auml;”. If you are ever in the situation where you have to think about “is this plain text that is HTML escaped or just plain text”, you are in deep trouble already.

Spending too much Time with the Choice of Framework

This should probably go at the top. If you have a small application (say, less than 10,000 lines of code), the framework probably isn't your problem anyway. And if you have more code than that, it's still not that hard to switch systems when you really have to. In fact, even switching out core components like an ORM is possible and achievable if you write a little shim and then get rid of it step by step. Better to spend your time making the system better. The framework choice used to be a lot harder when the systems were incompatible, but that is clearly no longer the case.
In fact, combine this with the next topic.

Building Monolithic Systems

We are living in an agile world. Some systems become deprecated before they are even finished :) In such an agile world new technologies are introduced at such a high speed that your favorite platform might not support them yet.
As web developers we have a huge advantage: we have a nice protocol to separate systems. It's called HTTP, and it's the base of all we do. Why not leverage that even further? Write small services that speak HTTP and bridge them together with another application. If that does not scale, put a load balancer between the individual components. This has the nice side effect that each part of the system can be implemented in a different technology. If Python does not have the library you need or does not have the performance: write that part of the system in Ruby, Java or whatever comes to mind.
But don't forget to still make it easy to deploy that system and put another machine in. If you end up with ten different programming languages with different runtime environments you are quickly making the life of your system administrator hell.

Stolen from: http://lucumr.pocoo.org/2010/12/24/common-mistakes-as-web-developer/

Introduction

Recently I switched from using bash to zsh as my main shell. I'd heard a lot of good things about it (and about how complex it is, whoa), so I decided to try it out for myself. The main reasons were that it was already installed on my system (Mac OS X) and that it has Emacs key bindings. What I found was "the missing shell": in a lot of ways interactive mode is similar to bash, which is great, so I could easily fit into it and feel comfortable, but it extends bash in areas such as completion, globbing and customisation. Next we'll look at some of the most important aspects of zsh that I found in my short experience with it, and some examples which might motivate you to try it yourself. NOTE: I have purposely left out the scripting side of zsh because I haven't had much experience with it yet, and I have about the same amount of experience with bash scripting.

Why use zsh?

As mentioned before, here are some of the most important aspects of zsh for me.

Expressive

One aspect of zsh that first stood out for me was its extensive globbing capabilities, for example:
ls -d ^*.zsh 
This will display all files except (^) files with the .zsh extension.
ls *.zsh~i*
This will list all files with the .zsh extension except files beginning with the letter 'i'.
ls *.(zsh|rb)
This is grouping and will print files with zsh and rb extensions.
ls **/*zsh
Will output:
configs/aliases.zsh
configs/bindings.zsh
configs/completion.zsh 
configs/exports.zsh
configs/prompts.zsh
init.zsh
The ** recursively globs through directories for zsh files.
To enable these features you'll need setopt extendedglob in your config.

Completion

Zsh's completion is more advanced than what I was used to with bash: with zsh you get suggestions in a menu that you can browse with the arrow keys, more intelligent context-aware suggestions, and more.
Here's an example of context aware suggestions:
kill <TAB>
With this you'll get a list of running processes.
When you're browsing a directory you also get more information about what the file is, for example:
ls ~/.z <TAB>
And you'll get what you expect, but if you have symlinked files they'll look like this:
.zshrc@
The @ symbol denotes a symlink. This is only a small feature but it's nice and you don't need to ls -l.
There is also support for remote completion!

Prompts

You can have multiline prompts, which can have many features attached to them, like battery charge and load (see this blog for a good example). The prompt is at the heart of zsh's customisation capabilities: you can pretty much configure it to look any way you want, and there are plenty of examples out there ranging from simple to mad!

alias -g and alias -s

With zsh you can have a file for global (-g) aliases and suffix (-s) aliases, for example:
alias -s pdf=xpdf
So now if you execute a file ending in .pdf, the command will be rewritten to xpdf foo.pdf.
Global aliases are expandable anywhere on the command line, not just at the beginning; for example, with alias -g G='| grep', the command ls G zsh expands to ls | grep zsh. Global aliases can be dangerous if something gets expanded that shouldn't be.

Easy setup

To get a reasonable config it only takes about 4 lines of code:
autoload -U compinit promptinit        
compinit
promptinit        
prompt walters
With that you get tab completion (compinit) and a coloured prompt (promptinit). You can see the list of built-in prompts with:
prompt -l
You can also add a prompt to your config with:
export PS1="$(print '%{\e[1;34m%}%n%{\e[0m%}'):
$(print '%{\e[0;34m%}%~%{\e[0m%}') → "
This can all go into your config file at ~/.zshrc. As zsh is fully customisable I prefer to split up my files (see a link to my config at the end of this article). With this you're ready to go with some of the best features of zsh to play with, and as you grow your config will grow with you.

Emacs

Being able to use one set of keybindings across applications is handy, and as an Emacs user I was pleased to find out that zsh supports Emacs keybindings out of the box (Vim users can set the $EDITOR variable in their config), so the normal navigation rules apply!

Conclusion

I've only scratched the surface here of what you can do with zsh and how it can make your terminal life a little bit easier. You should give zsh a try: it's expressive, very powerful, and fits nicely into a programmer's toolbox. If you want somewhere to get started you can take a look at my configs, or if you're a git user you can clone them with:
 git clone git@github.com:jbw/zsh.git 
Happy tweaking!

C++ Streams & Typedefs: Be Charful

The C++ typedef keyword is indispensable in many situations, especially for writing portable low-level code. However, in some circumstances it can cause trouble, particularly when it comes to function overloading. Consider the following C++ template class:
template <typename T>
struct foobar
{
    foobar( const T foo ) : foo_( foo ) {}
    T foo_;
};
One might want to write a simple stream output operator to format the template class’ member values, e.g. for debugging purposes:
template <typename T>
ostream& operator<<( ostream& s, const foobar<T>& fb )
{
    return s << "foo: " << fb.foo_;
}
This seems reasonable. Now, let’s assume that this template is going to be used in a context where T will be one of several fixed-width integer types. These are usually typedefs from a header like stdint.h (for those that don’t mind including a C header) or boost/cstdint.hpp (to be a C++ purist). They are commonly named int64_t, int32_t, int16_t, and int8_t, where the X in intX_t specifies the number of bits used to represent the integer. There are also unsigned variants, but we’ll ignore those for this discussion.

Let’s now explore what happens when we initialize a foobar<intX_t> instance with its foo_ member set to a small integer and print it to standard output via our custom stream output operator:
cout << foobar<int64_t>( 42 ) << endl;
cout << foobar<int32_t>( 42 ) << endl;
cout << foobar<int16_t>( 42 ) << endl;
Each of these statements prints “foo: 42”, as expected. Great, everything works! But wait, there was one type that we didn’t test:
cout << foobar<int8_t>( 42 ) << endl; 
This prints “foo: *” instead of “foo: 42”. This is probably not the expected result of printing the value of an int8_t. After all, it looks and feels just like all of the other intX_t types! What causes it to be printed differently from the other types? Let’s look at how the integer types might be defined for an x86 machine:
typedef long int int64_t;
typedef int int32_t;
typedef short int16_t;
typedef char int8_t;
The problem is that the only way to represent an integer with exactly 8 bits (and no more) is with a char (at least on the x86 architecture). While a char is an integer, it is also a… character. So, this trouble is caused by the fact that the char type is trying to be two things at once. A simple (but incorrect) approach to work around this is to overload the stream output operator for the int8_t type, and force it to be printed as a number:
// This is incorrect:
ostream& operator<<( ostream& s, const int8_t i )
{
    return s << static_cast<int>( i );
}

The problem with this approach is that the int8_t typedef does not represent a unique type. The typedef keyword is named poorly; it does not introduce new types. Rather, it creates aliases for existing types. By overloading the stream output operator for the int8_t type, the char type’s operator is being overloaded as well. Since the standard library already defines a stream output operator for the char type, the above definition would violate the One Definition Rule and result in a compiler error. Even if it did compile, the results of redefining the way characters are printed would probably not be desirable.

An alternative (working) solution to the problem is to overload the output stream operator for the foobar<int8_t> type:

ostream& operator<<( ostream& s, const foobar<int8_t>& fb )
{
    return s << "foo: " << static_cast<int>( fb.foo_ );
}
This definition does not clash with any existing overloads from the standard library, and it effectively causes the int8_t to be printed as an integer. The downside is that it will cause unexpected behavior when a foobar<char> is printed, if the programmer intends char to represent a character. The only way to avoid this would be to define int8_t as a class instead of making it a typedef, and providing a well-behaved stream output operator for that class. The class’ arithmetic operators could be overloaded to make it look almost exactly like a POD integer, and it wouldn’t necessarily take up any extra memory. However, this solution is still not ideal, because classes behave differently than POD types in subtle ways (e.g. POD types are not initialized by default, but classes are).

If there’s anything to take away from this, it’s that the C++ char type is an odd beast to watch out for. Also, the name of the typedef operator could use some improvement…


Apple vs GNU+Linux



People complaining about lack of Linux support should try to install Mac OS on anything apart from Mac hardware. Unless you have a cracked version, it's not even possible (and good luck finding/trusting those!). Even with hacked versions, try to find driver support for all your hardware. Linux is worlds ahead when you want to choose your hardware. If you don't feel like shelling out a thousand or more bucks every time a computer dies, or you want to use an older computer, or build a special-purpose computer for a friend without taking out a loan, the choice is clear. If you want to be guaranteed the same compatibility you get with Apple's OS, then you can still use Linux, pre-installed on a laptop from Dell, etc. Wow, it just works! Magic! :P

Linux (especially Ubuntu, lately), has become a definite advantage to people who want to choose their hardware, and it gets better with every release.

With Apple, unfortunately, you buy more than just hardware and an OS, you've officially bought a style, image, and a glowing apple to make sure everyone knows that the computer is far more important than whatever you happen to be doing on it. Apple's marketing machine makes sure that every Mac user is a walking/talking advertisement for it. That's good for Apple, but that's a lot of pretentious corporate overhead for me. I don't want to buy an image. I don't want endless throngs of consumer drones commenting "Wow, is that the new Apple whatsit? I want one of those!" People don't care what you're doing. It's all about that Mac with the glowy apple logo. You might as well not be there. I'd much rather be using Blender, GIMP, Photoshop, Open Office, and be appreciated for what I'm doing, rather than having endless queries about Apple's latest gadget. I do a lot of cool stuff, so it helps (especially for my business) to have people's focus on the artwork, and not the laptop.

People will even attribute your success to Apple's hardware. "Well, I could be good too, if I had one!" Of course, disillusionment follows after having shelled out the money for one; no, no, you can't be good just for buying an overpriced machine with go-faster stripes. It takes a lot of hard work! Endless hours of practice and honing your skills to be good, even with Apple's glory-glory ware! They will happily take credit for your work. And people merrily give it to them. "Yeah.. huh huh. I use a Mac! Macs are BETTER for graphic design!" Yeah? Try graphic design without Adobe's software. You realise Apple doesn't make that, right? Try using iPhoto. Then you'll see how much better Apple is at graphic design! Better yet, try to find where that crapware stored the file you're working on. Thousands of high-res photos/projects? Notice how iPhoto takes 4 minutes to load? Yep. Better!

I know a whole company of graphic designers, all on Mac, all very very embarrassed to be walking Apple stereotypes, and just waiting for something else to come along that runs the latest version of Adobe's Creative Suite. Unfortunately, only available for Macs and Windows PCs. (Adobe's fault, not Linux).

I'm a professional graphic designer (with 12 years of industry experience). I choose Linux, and if Photoshop is a must (it is sometimes), I use WINE. If the latest version of the Creative Suite is a must (often it is, for the printing/publishing industry), I'll run a VM with Windows XP.
Or maybe I could run OSX in a VM? Oh wait... no, no I can't. Apple's suing machine won't allow any of the VM developers to make that possible. Okay. Well... guess I have NO USE for OSX then... at all! :D

I do this all from my laptop/Linux workstation, which I paid like $400.00 for, from a guy selling it to buy an iPad. Lol. I guess there IS some benefit to Apple's advertising machine after all! :D I haven't needed a desktop machine (or a laptop that costs over $600) for anything in over a decade.

Moreover, I have a full Ubuntu 10.10 installation on a USB keychain, so if my hard drive fails, I can boot off the keychain and keep working from my backed-up files (using Ubuntu One's free cloud) while I order a new one, without missing a beat. I don't even have to go home from the coffee shop! Magic! :D

I'm sure when Apple does the same (or a strikingly similar) thing, they will call it the best innovation ever, and praise themselves for being ahead of the game in all things. "All others can only follow", and other rabid fanboy rubbish. So when that happens (again), know that the better informed world is laughing at everyone who adopts idiotic mantras like that. To those people: we're happy to tell you you're wrong, explain clearly why, then watch you snort, put your nose up in the air and continue to pretend your Mac makes you more sophisticated, efficient, and cool.

We'll smile, and continue using Linux, and reaping the benefits, making it better and easier to use for the "average person" (like it's not already? Try Ubuntu 10.10), and enjoying some of the best eye-candy interfaces out there, for free (thanks Compiz folks!).

To end this bloody dissertation (TLDR)
I'd like to extend thanks to the Linux community (including Canonical) for making supremely good software that everyone can use free of charge, and continuing to make Linux better, and easier to use for everyone.

And remember, kids: It doesn't matter what Apple fanboys "think". You could slap an Apple logo on anything and they'd take out a second mortgage just to have it. If you listen to that kind of opinion, you certainly deserve to be excluded from the joys of Linux. :)

A lot of people are buzzing about Apple's Mac App Store, but I'm unimpressed. I've had the same features on Linux since the late 90's.
Granted, I'm being a little snarky — but only a little. Apple's App Store for the iPhone was a big deal because, before Apple, the application landscape for mobile phones was not that rosy. Apple simplified getting applications on the phone without having to deal directly with the carriers — so some credit is due there. They've also raised the bar in terms of what developers are shooting for for mobile devices, so kudos to Apple for that.
But the buzz over the Apple Mac App Store? Meh. Look at the features that Apple touts:
  • Install any app with ease
  • Keep your apps up to date
  • The app you need. When you need it
  • Buy, download, and even redownload
Linux folks, sound familiar? We've had all of this, modulo "buy", for a decade at least. The Advanced Package Tool, a.k.a. "APT" for Debian-based systems (that includes Ubuntu), has made all of this possible for years and years. Granted, this has primarily focused on free and open source software, but paid apps are possible too. The Ubuntu folks have had a paid software store since Ubuntu 10.10. (It is, I admit, sparsely populated when it comes to proprietary/paid software.)
But the installation, updating, and such? All very possible with APT — or Yum or Zypper, if you happen to be using an RPM-based distro. (Or APT for RPM, if that's still being maintained.)
Apple brags about having more than 1,000 apps available at launch... Ubuntu users can find 32,000-plus packages in the software repository for Ubuntu 10.10. Now, a bunch of those packages are not end-user applications — this includes things like libraries, system utilities, fonts, and so forth. But you could easily find 5,000 end user apps, many of which are competitive with the proprietary stuff being offered through the Apple Mac App Store. Oh, and free. Free as in cost, and all open source. (Not all Free by definition of the Free Software Foundation, though, but that's another topic entirely.)
Of course, what Apple has done that's unique shows Linux folks what we need to be better at doing: marketing, developer and ISV relations, and standardization. Lest you think I'm only here to praise Linux or kick Apple, I'm not. Linux has had the raw tools to do this for a decade, but the communities and companies behind Linux have yet to gain enough momentum to pull this off on the desktop. Or the will to chuck tribal differences between desktops, toolkits, etc. and unify on one damn stack to attract the kind of developers that are filling up Apple's App Store. Canonical, bless their hearts, are trying — but it's unclear as of yet whether Canonical has enough pull to rally enough developers and inspire enough ISVs to drive even 100 paid desktop apps to Linux, much less 1,000.
The Linux community should get some credit here, though. What has been hard for the users of arguably the easiest operating system to use has been easy for Linux users for years. A quick "apt-get update && apt-get upgrade" and my entire system is updated, apps and all. A quick "apt-get install" and I can have everything from the Banshee media player to the latest Chrome release. Typing is not required, of course. Each distribution has GUI tools that make it very easy to install and manage applications.
And, it's important to add — I can do all this without the blessing of any single company. You see, while Apple controls everything that goes into the App store, nobody controls what users add to their APT, Yum, or Zypper repos.
So Linux users have had the tools and the freedom, just a severe lack of marketing and developer-relations smarts. That includes failing to have a single dominant toolchain (GUI toolkit, etc.) for companies to target. It seems that Nokia (with Qt) might be on to something here, though. It's pretty clear what the overall Linux community and vendors need to address; it's just a question of whether they do, and whether it's too little and too late for any mainstream traction.
I do hope others in the tech press will at least, in passing, note that Apple has not invented something new with its App Store — merely taken an old idea and run with it better than the competition. Which, come to think of it, seems to be the company's specialty.


Publish Source code in Blogger

After posting any source code, we first need to make sure that the code is correct and the same as the original. For HTML or script source code, we need HTML encoding so that Blogger doesn't execute the code. The source code should also be formatted differently from the surrounding content, and there should be options to copy or print the code so that the user doesn't need to select the entire block and press Ctrl+C. I use SyntaxHighlighter to display source code in Blogger; it fulfills all of these requirements. The layout is very pretty, with line numbers, and it provides "View Plain" (to display only the source code in another window), "Copy to clipboard" and "Print" options. Let's see how we can use SyntaxHighlighter in Blogger without any hosting.

1. In Blogger, click on the Layout tab -> Edit HTML and put the following before </head>:

<link href='http://syntaxhighlighter.googlecode.com/svn/trunk/Styles/SyntaxHighlighter.css' rel='stylesheet' type='text/css'/>
<script language='javascript' src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shCore.js'/>
<script language='javascript' src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCpp.js'/>


2. Put the following before </body>:

<script language="javascript">
dp.SyntaxHighlighter.BloggerMode();
dp.SyntaxHighlighter.HighlightAll('code');
</script>


3. HTML-encode your source code. For this, you can use this.
OR

Copy your code into Notepad and replace all < with &lt;, etc.

4. Put your updated code between:
<pre name="code" class="Cpp">
….My code here…
</pre>

Shamelessly stolen from: http://urenjoy.blogspot.com/2008/10/publish-source-code-in-blogger.html
You can add yourself to the GuyWhoSteals fanpage on Facebook or follow GuyWhoSteals on Twitter.

“And I always find, yeah, I always find somethin’ wrong
You been puttin’ up with my sh*t just way too long
I’m so gifted at findin’ what I don’t like the most
So I think it’s time for us to have a toast

Let’s have a toast for the douchebags,
Let’s have a toast for the a**holes,
Let’s have a toast for the scumbags,
Every one of them that I know
Let’s have a toast for the jerkoffs
That’ll never take work off
Baby, I got a plan
Run away fast as you can”
–Kanye West

In high tech, intelligence is always a critical element in any employee, because what we do is difficult and complex and the competitors are filled with extremely smart people. However, intelligence is not the only important quality. Being effective in a company also means working hard, being reliable, and being an excellent member of the team.

When I was a CEO, this was one of the most difficult lessons for me to learn. I felt that it was my job to create an environment where brilliant people of all backgrounds, personality types, and work styles would thrive. And I was right. That was my job. Companies where people with diverse backgrounds and work-styles can succeed have significant advantages in recruiting and retaining top talent over those that don’t. Still, you can take it too far. And I did.

Here are three examples of the smartest people in the company being the worst employees.

Example 1: The Heretic
Any sizable company produces some number of strategies, projects, processes, promotions, and other activities that don’t make sense. No large organization achieves perfection. As a result, a company needs lots of smart, super engaged employees who can identify its particular weaknesses and help it improve them.

However, sometimes really smart employees develop agendas other than improving the company. Rather than identifying weaknesses so that he can fix them, he looks for faults to build his case. Specifically, he builds his case that the company is hopeless and run by a bunch of morons. The smarter the employee, the more destructive this type of behavior can be. Simply put, it takes a really smart person to be maximally destructive, because otherwise nobody else will listen to him.

Why would a smart person try to destroy the company that he works for? There are actually many reasons. Here are a few:

1. He is disempowered: he feels that he cannot access the people in charge and, as a result, complaining is his only vehicle to get the truth out.

2. He is fundamentally a rebel: he will not be happy unless he is rebelling; this can be a deep personality trait. Sometimes these people actually make better CEOs than employees.

3. He is immature and naïve: he cannot comprehend that the people running the company do not know every minute detail of its operation, and therefore believes they are complicit in everything that's broken.

Often, it’s very difficult to turn these kinds of cases around. Once an employee takes a public stance, the social pressure for him to be consistent is enormous. If he tells 50 of his closest friends that the CEO is the stupidest person on the planet, then reversing that position will cost him a great amount of credibility the next time he complains. Most people are not willing to take the credibility hit.

Example 2: The Flake
Some brilliant people can be totally unreliable. At Opsware, we once hired an unequivocal genius—I’ll call him Roger (not his real name). Roger was an engineer in an area of the product where a typical new hire would take three months to become fully productive. Roger came fully up to speed in two days. On his third day, we gave him a project that was scheduled to take one month. Roger completed the project in three days with nearly flawless quality. More specifically, he completed it in 72 non-stop hours: no stops, no sleep, nothing but coding. In his first quarter on the job, he was the best employee we had, and we immediately promoted him.

Then Roger changed. He would miss days of work without calling in. Then he would miss weeks of work. When he finally showed up, he apologized profusely, but the behavior didn’t stop. His work product also degraded. He became sloppy and unfocused. I could not understand how such a stellar employee could go so haywire. His manager wanted to fire him, because the team could no longer count on Roger for anything. I resisted. I knew that the genius was still in him and I wanted us to find it. We never did. It turns out that Roger was bi-polar and had two significant drug problems: 1. He did not like taking his bi-polar medication and 2. He was addicted to cocaine. Ultimately, we had to fire Roger, but even now, it pains me to think about what might have been.

One need not be bi-polar to be a flake, but flakey behavior often has a seriously problematic root cause. Causes range from self-destructive streaks to drug habits to moonlighting for other employers. A company is a team effort and, no matter how high an employee’s potential, you cannot get value from him unless he does his work in a manner in which he can be relied upon.

Example 3: The Jerk
This particular smart-bad-employee type can occur anywhere in the organization, but is most destructive at the executive level. Most executives can be pricks, dicks, a-holes, or a variety of other profanities at times. Being dramatically impolite can be used to improve clarity or emphasize an important lesson. That’s not the behavior that I am talking about.

When used consistently, asinine behavior can be crippling. As a company grows, its biggest challenge always becomes communication. Keeping a huge number of people on the same page executing the same goals is never easy. If a member of your staff is a raging jerk, it may be impossible. Some people are so belligerent in their communication style that people just stop talking when they are in the room. If every time anyone brings up an issue with the marketing organization, the VP of marketing jumps down their throat, then guess what topic will never come up? This behavior can become so bad that nobody brings up any topic when the jerk is in the room. As a result, communication across the executive staff breaks down and the entire company slowly degenerates. Note that this only happens if the jerk in question is unquestionably brilliant. Otherwise, nobody will care when she attacks them. The bite only has impact if it comes from a big dog. If one of your big dogs destroys communication on your staff, you need to send her to the pound.

When do you hold the bus?
The great football coach John Madden was once asked whether or not he would tolerate a player like Terrell Owens on his team. Owens was both one of the most talented players in the game and one of the biggest jerks. Madden answered: “If you hold the bus for everyone on the team, then you’ll be so late that you’ll miss the game, so you can’t do that. The bus must leave on time. However, sometimes you’ll have a player that’s so good that you hold the bus for him, but only him.”

Phil Jackson, the basketball coach who has won the most NBA championships, was once asked about his famously flakey superstar Dennis Rodman: “Since Dennis Rodman is allowed to miss practice, does this mean other star players like Michael Jordan and Scottie Pippen can miss practice too?” Jackson replied: “Of course not. There is only room for one Dennis Rodman on this team. In fact, you really can only have a very few Dennis Rodmans in society as a whole; otherwise, we would degenerate into anarchy.”

You may find yourself with an employee who fits one of the above descriptions, but nonetheless makes a massive positive contribution to the company. You may decide that you will personally mitigate the employee’s negative attributes and keep them from polluting the overall company culture. That’s fine, but remember: you can only hold the bus for her.

Shamelessly stolen from: http://blogs.forbes.com/bruceupbin/2011/01/03/when-smart-people-are-bad-employees/