Monday, December 29, 2008

XML element order in WCF

Call me old-fashioned, but I've always held the belief that in XML, nesting is important and element order should never be. I've always found it unbelievably stupid when an application depended on the order of elements in an XML document. To me XML is about representing structured data, not about enforcing a rigid document layout.

Imagine my surprise when I started testing my new WCF webservice with a request I wrote by hand. All of a sudden a member was left null no matter what I did. Because it was the last element in my hand-written request that remained null, my first thought (after extensively verifying I didn't make any typos) was that maybe, for some obscure reason, the last member didn't get deserialized correctly. So I put another member last and indeed it was also null. Then I put yet another member last and all of a sudden all were filled, except for the first one.

After a few hours of WTF?!! I found out that the default serializer for WCF, the Microsoft favourite, the DataContractSerializer, has a dependency on XML element order. It expects elements in data contract order, which is alphabetical by member name unless you set the Order property on "DataMember". After switching to the XmlSerializer (just remove the "DataContract" attribute from the classes and the "DataMember" attributes from their members, replace them with the "Serializable" attribute and/or any of the "Xml*" attributes from the "System.Xml.Serialization" namespace, and tack the "XmlSerializerFormat" attribute on the contract that contains your WCF methods) it all just worked! Order wasn't important anymore, as it should be.
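For illustration, the switch boils down to something like this. A minimal sketch; the Account type, its members and the service contract are made up here, not taken from my actual project:

```csharp
using System;
using System.ServiceModel;
using System.Xml.Serialization;

// Before: DataContractSerializer style, order-sensitive on the wire.
// [DataContract]
// public class Account
// {
//     [DataMember] public string Name { get; set; }
//     [DataMember] public int Age { get; set; }
// }

// After: XmlSerializer style, element order no longer matters on deserialization.
[Serializable]
public class Account
{
    [XmlElement] public string Name { get; set; }
    [XmlElement] public int Age { get; set; }
}

[ServiceContract]
[XmlSerializerFormat] // tells WCF to use the XmlSerializer for this contract
public interface IAccountService
{
    [OperationContract]
    Account GetAccount(string name);
}
```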

What I don't get is why Microsoft promotes the DataContractSerializer as the preferred method for WCF. The XmlSerializer is much more flexible if you want to control what the resulting XML document looks like, and XML element order doesn't matter! The only advantages of the DataContractSerializer are that it only serializes members that carry the required attribute (opt-in instead of the opt-out approach of the XmlSerializer) and that it can serialize private members, which can come in handy from time to time. But it has a dependency on XML element order!!

Wednesday, December 3, 2008

Terribly busy

It's been a while since I last posted anything. I've been extremely busy at work. We're currently nearing the first milestone of a very ambitious project. I'm not 100% sure we'll make the deadline, but we're doing the best we can. We're currently using WCF and Windows Workflow. While the WCF part is very basic, just a webservice that accepts requests to do stuff, the WF stuff is more interesting, so I hope to be able to post more about that later. Currently I'm trying to implement a SOAP logger. We had one for our ASMX webservices, but since we use WCF now we can't re-use it. According to some blogs I found on the subject, creating one is easier than it was for ASMX webservices. But I haven't got it running yet. As soon as I have, I'll detail it here.
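The blogs I found generally use WCF's message inspector extensibility point for this. A minimal sketch of that approach, my own guess at it rather than the finished logger:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class SoapLoggingInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // A Message can only be read once, so buffer it and hand WCF a fresh copy.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        request = buffer.CreateMessage();
        System.Diagnostics.Trace.WriteLine(buffer.CreateMessage().ToString());
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
        reply = buffer.CreateMessage();
        System.Diagnostics.Trace.WriteLine(buffer.CreateMessage().ToString());
    }
}
```

The inspector still has to be attached through a service or endpoint behavior, which is probably the fiddly part I haven't got working yet.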

Tuesday, September 2, 2008

LINQ is faster than it looks

I was recently working on a LINQ query. I needed to get some information from a database table, and for each row I needed to know whether related information was available in either of two other database tables. I wasn't interested in that related information itself, just whether or not it was there. Basically I needed to return some stuff and two booleans.
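For illustration, such a query can be shaped roughly like this in LINQ to SQL. The context, tables and columns are made up; the point is only the pattern of projecting Any() results alongside the data:

```csharp
// Hypothetical data context and tables, just to show the shape of the query.
var results =
    from account in db.Accounts
    select new
    {
        account.Id,
        account.Name,
        // Two existence checks against related tables, returned as booleans.
        HasOrders   = db.Orders.Any(o => o.AccountId == account.Id),
        HasPayments = db.Payments.Any(p => p.AccountId == account.Id)
    };
```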

I couldn't get it to work in one single query, so as always I turned to my colleague, who knows a helluva lot more about databases and SQL than me, and asked him for help. Since he doesn't know LINQ he asked me for the SQL that LINQ generates for the query, so he could use that as a starting point. So I fired up LINQPad and generated the SQL.

The SQL LINQ produced for my particular query was a lot of selects. It was selects within selects within selects within selects, with a CASE ... WHEN ... THEN containing more selects. So, understandably, my colleague expressed his disgust with the SQL and then proceeded to give me a very lean-and-mean query that did exactly what I wanted. I updated my existing LINQ query to use some of the SQL constructs he used and my LINQ query gave the exact results I wanted. I was happy, but my colleague wasn't.

My colleague wanted the final SQL as produced by the LINQ query that I had now, so I went into LINQPad again, generated the query and sent it to him. This time LINQ had added another level of selects, and after seeing the query my colleague went on a rant about how bad LINQ was and how much performance we'd lose if we used it. I wasn't worried. This particular project I was working on was not a high-profile application, but rather an internal tool for supporting some testers. So as long as I got an answer from SQL Server within 1 second I was happy. My colleague wasn't. He wanted to know exactly how much performance was lost on this LINQ nonsense.

My colleague took the SQL query he had handcrafted to perfection over 30 minutes and the bloated SQL query that LINQ produced for the query I created in less than 5 minutes, and ran them both through the SQL Server Profiler. Then his heart stopped. According to the SQL Server Profiler both queries took exactly the same time; in fact, both queries resulted in exactly the same execution plan!

Again this was just a reminder of something I mentioned before: Don't guess about performance. Measure! And also, the SQL Server Query Optimizer is pretty bad-ass :-)

Wednesday, August 20, 2008

Problems with Visual Studio 2008 SP1

I installed VS2008 SP1 the week after it came out. I used the preparation tool first, then installed the service pack and everything seemed to work perfectly. I haven't really checked out all of the new features, but I really like the fact that all the TODOs in your solution now show up, instead of only the TODOs in the files you currently have open. And I also like the VB-esque background compiling (or however they implemented it), which shows you selected compiler errors before you compile. If you create a function that returns a value but haven't put in the return statement yet, the IDE will already inform you that the function doesn't return a value.

So all was fine and perfect, until I tried to debug an ASP.NET application. Then the IDE would freeze and prompt me with a window that told me VS2008 had an internal error and must close. I could then close VS2008, or close it and look for a solution online. Whichever I chose, the window would disappear, but VS2008 would remain, taking up 100% CPU on one core. The other core would be pwned by WerFault.exe, and it would not end until I manually end-tasked DevEnv.exe.

I had my ASP.NET development set up to use IIS, so I switched to Cassini (the internal ASP.NET development server), but that didn't work. I then tried to run the application from IIS again and attach the debugger. Still hanging.
I then searched on Google for a bit and tried some work-arounds for similar problems I found there. I also tried the repair option in the VS2008 installer, but debugging still hung the IDE. So I reported it at Microsoft Connect and went home (I was at work at the time).

That evening at home I completely removed VS2008 and everything related from my laptop and reinstalled it. I verified debugging worked and when it did I installed SP1 again. But this time debugging kept working. So I have no idea what was wrong, but it's all working again now. So if anyone runs into similar problems, just uninstall and install again.

Saturday, August 16, 2008

Visual Studio 2008 Service Pack 1

VS2008 SP1 is out and you should download and install it. It's got major improvements. And it *finally* has the feature I requested somewhere around Visual Studio beta 2:

"In earlier versions of Visual Studio, the task list is populated by using the ToDo tasks that are specified in open files. In this service pack, the Visual C# IDE extends this functionality. The Visual C# IDE populates the task list by using the ToDo tasks that are specified in the whole solution."

Of course I'll have to see it before I believe it :-) I'm currently downloading the VS2008 SP1 ISO image.

Oh, and if you're going to install SP1, don't forget to run the Service Pack Preparation Tool first. This might save you some hassle with conflicting hotfixes or betas.

Update: Don't forget the .NET 3.5 Service Pack 1!

Saturday, August 9, 2008

Open source is not more secure by default

This week a rather interesting announcement was made in light of the Black Hat security conference. Some researchers claimed to be able to bypass all of Windows Vista's enhanced security features through Internet Explorer by using ActiveX, Java or .NET. They also claimed that Microsoft couldn't do anything to fix it, because it abused core architectural assumptions made by Microsoft.

This is pretty disturbing, but I'm waiting for an official Microsoft response. I recall something similar in the past and then Microsoft was able to patch it up. I just can't for the life of me remember what that was, so maybe I'm just imagining things.

This message also spurred some interesting discussions on forums around the world. A lot of people are expressing their "expert" opinion on the matter. And on public forums everyone is a die-hard kernel developer of course. The most interesting discussion that came up again was the notion that open source is more secure, because everyone can look at the source and fix bugs.

I've always found this idea to be naive. First of all, just because everyone can look at the code doesn't mean everyone will. I'm a programmer, but I never look at the source code of open source applications, unless I want to see how they implemented a certain feature so I can use it somewhere too. But sure, there are bound to be enthusiasts who will review the code. And they will find an occasional bug, but will they find all the security bugs? Of course not; why should they? Are they all software security experts? No. I think these "extra reviewers" are more likely to find typos than actual security bugs. I'd rather put my faith in static analysis tools that detect common errors, and in security researchers who try to break an application, than in a hobbyist security "guru".

I think the strength of open source lies in the fact that everyone interested can pitch in, but not because every user is a security reviewer.

Friday, August 1, 2008

SecuROM strikes again

I've posted about DRM before and I've mentioned SecuROM before. Today I was uninstalling a program that left a shortcut on my desktop, so I wanted to right-click on it and delete it. But when I right-clicked on it, Explorer crashed. Thinking it might be a fluke I tried again, and Explorer crashed again. I figured it might be the shortcut itself, so I selected it and pressed "delete", but... Explorer crashed again. So I opened a command prompt and deleted the shortcut from there. That worked.

Then I right-clicked another shortcut and Explorer crashed again. Clicked another shortcut and Explorer crashed. So I rebooted Windows to solve the problem, only it wasn't solved. Explorer still crashed if I right-clicked a shortcut.

I suspected it might have something to do with shell extensions. And when I googled the problem I found a program called ShellExView. This program shows you all the shell extensions that are currently loaded and lets you disable them. So I started the program, selected all the "Context Menu" type extensions and tried to disable them. Nothing happened. I figured it might be because I wasn't running the program as administrator.

So this was interesting. I couldn't run the program as administrator by right-clicking on it (that would just crash Explorer), and the Ctrl-Shift-Enter method of starting an elevated program would also cause Explorer to crash, so I had a problem. But then I remembered something about how Vista decides to elevate a program even when it's not asked to. You see, installers need to run elevated or they wouldn't be able to install anything, but older installers don't have a manifest that Vista can use to determine they need elevation. I read on one of the many developer blogs I follow that Vista can also decide to elevate based on the name of the executable. So I renamed "shexview.exe" to "shexviewsetup.exe" and I got a nice elevation prompt.

So I disabled all the "Context Menu" extensions and tried to right-click a shortcut. No crash. Great! But which extension caused the problem? I wanted to enable them all one by one, when I saw an extension that I immediately suspected of being the culprit. The extension was called "CmdLineContextMenu Class" and its description read: "SecuROM context menu for Explorer."

Why would I need a SecuROM extension installed? What does it do? And why was it installed without my consent? The only thing I installed right before I started noticing crashes was the new Space Siege demo. But why would you want a freely available demo to be copy protected? Oh wait, it's not the first time they've done that. But I had already uninstalled Space Siege, so why wasn't SecuROM removed? Why would companies want to leave a rootkit behind on my system? Then again, I'm not absolutely sure it came from the Space Siege demo (UPDATE: it didn't, see below), but it was the last thing I installed before Explorer started crashing. I tried looking in the installer files of the demo, but those are InstallShield cabinet files and I can't look into them.

Aside from being some nasty piece of DRM, the extension was installed in a very peculiar location. Not in "Program Files" or maybe the "Windows" directory like you'd expect. No, it was installed in the temp directory of my user profile! It's called "CmdLineExt.dll" and when I looked in the temp directory I also noticed the file "drm_dyndata_7370010.dll" of which the file details also mentioned being part of SecuROM.

Of course the question is, why did it crash? Now I'm happy it did, otherwise I wouldn't have found out, but I'm still curious as to why it crashes Explorer whenever I right-click a shortcut.
I'm a big fan of Mark Russinovich's blog and his "The case of..." series of blog posts. I have WinDbg installed; however, I'm mainly a C# developer and haven't done any real Win32 C++ programming for years, and even when I did, it wasn't at the low level that Mark understands. But I did do some basic stuff with WinDbg.

I attached WinDbg to Explorer, enabled the extension and made Explorer crash. So now I'm in WinDbg and it's telling me Explorer has crashed. I run the !analyze command and get the following feedback: "Probably caused by : heap_corruption ( heap_corruption!heap_corruption )".

I get the callstack and I see these as the last lines:

77b39790 04800fd8 00250000 ntdll!DbgBreakPoint
c0000374 77c4c030 000ee1ac ntdll!RtlReportCriticalFailure+0x2e
00000002 77b39754 00000000 ntdll!RtlpReportHeapFailure+0x21
00000008 00250000 04800fd8 ntdll!RtlpLogHeapFailure+0xa1
00250000 00000000 04800fe0 ntdll!RtlFreeHeap+0x60
04800fe0 000ee408 00000000 kernel32!GlobalFree+0x47
000ee240 00000001 04800fe0 ole32!ReleaseStgMedium+0x124
WARNING: Stack unwind information not available. Following frames may be wrong.
03d41cdc 00000000 0030db98 CmdLineExt!DllUnregisterServer+0x3c1c
02640e60 0030db98 00000004 SHELL32!HDXA_QueryContextMenu+0x1b5
055c2ee8 00f902ef 00000000 SHELL32!CDefFolderMenu::QueryContextMenu+0x38b

Looks about right, although I can't be absolutely sure since I don't have the debug symbols for the SecuROM DLL.

So looking at the stack trace I get the idea the SecuROM extension is trying to free some COM object twice. I don't know what object and I don't know why; my low-level knowledge ends about here. I'm just happy I found the problem and made my system right-clickable again.

Oh and one interesting thing I've learned tonight. When you have Explorer.exe crashed in the debugger, you can't use ALT-TAB (which wasn't unexpected), but you can use WIN-TAB (Flip3D). That made switching back and forth between windows a whole lot easier.

UPDATE: It appears it wasn't the Space Siege demo that infected me with the DRM rootkit; it was the Mass Effect 1.01 update. I installed that, but installed the Space Siege demo right after it. So even though I didn't get infected by their rootkit the first time, I was stupid enough to fall for it anyway.

Wednesday, July 23, 2008

Save your Visual Studio window layouts

Before I had my nice new laptop I had a normal desktop for work. It had two monitors so when I worked, I spread the windows of Visual Studio to cover both screens and give me maximum working space. If I worked from home I would connect to my work desktop using Remote Desktop and work like that. The problem was that this only gave me one monitor so I had to readjust the window layout for single-monitor usage. If I then would go to the office the next day, Visual Studio would still be in single-monitor window layout from the day before and I would have to drag all the windows around to get my dual-monitor layout back.

Now I have a nice laptop, so whether I work from home or in the office, I use the same machine. But I still have two monitors at work and only one at home. So even though I drag my PC with me everywhere, I still have the window layout problem.

Two or so years ago I discovered the VSWindowManager. This is a plug-in for Visual Studio that allowed you to save up to three profiles of different window layouts. You could then add three buttons to your toolbar to access those profiles. It did exactly what I wanted, but I still had two minor issues with it. First, I couldn't rename the buttons, so I had to remember which button mapped to which layout. This wasn't that big of a problem. Secondly, and this was much more annoying, if you clicked the button of the profile you currently had loaded, it would save the window layout into that profile. So if I came in to work after having worked from home and accidentally clicked the wrong button while trying to restore my dual-monitor layout, I would overwrite that layout with the current single-monitor layout and would have to lay out all the windows by hand again.

If you look at the VSWindowManager project, it was last updated in September 2006. Now I do most of my work in Visual Studio 2008 and the plug-in doesn't work on this newer version. Sure, someone downloaded the source and got it to work for VS2008, but that didn't feel right to me since the original author had abandoned the project. So it was back to doing stuff by hand.

Earlier tonight I got fed up with it and decided to find a solution (again). I hoped that maybe someone else had written a similar plug-in. Instead, I found something way better. After googling a little, I came upon this MSDN article titled: Visual Studio 2005 IDE Tips and Tricks. It has a section specially dedicated to saving window layout and it provided a very simple solution to my problem. It uses the possibility to export user settings, together with a small macro to achieve exactly what I need.

Basically it works like this. You start by laying out the windows the way you like them. Then you export your IDE settings using the "Import and Export Settings" wizard; in this wizard you can specify exactly which settings you want to export, so you choose to export only your window layout. You save the settings file in a known, accessible location (like your profile or documents folder) and name it so you know which settings these are. Then you create a new macro using the built-in macro editor. This macro simply imports the settings file. And since you only exported your window layout, that is the only setting that will be changed. So I can make a macro for my single-monitor setup and another macro for my dual-monitor setup. I can then add these macros to my toolbar and give them any icon and name I want.
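Since Visual Studio macros are written in VB, the import macro is only a couple of lines. A sketch; the file path is just an example, point it at wherever you saved your exported .vssettings file:

```vb
Sub LoadDualMonitorLayout()
    ' Import only the previously exported window-layout settings file.
    DTE.ExecuteCommand("Tools.ImportandExportSettings", _
        "/import:""C:\Users\me\Documents\DualMonitor.vssettings""")
End Sub
```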

Multi-/single-monitor problem solved!

Saturday, July 12, 2008

Code reduction through generic methods

I've built and now maintain a webservice application at work. The application has around 40 webservice functions, and earlier this week I had to dive into the application again because of some issues. It occurred to me how redundant all the code in the webservice functions was. I'm talking about the top-level code, the stuff in the *.asmx.cs files.

In this application the code there deals with input validation, authentication/authorization and sending back the results of the operation (success or error data). So most of the code in those ~40 functions is basically the same. The only thing that differs each time is the specific business logic that the function offers to the world, but that logic is nicely abstracted from the interface part of the application in the business logic part of the application so it's only a few lines of code in each top-level function to setup and call that specific handler. The main code of each function is stuff that each and every function needs to do. All copied and pasted. That can be done better.

I started off by creating a base class from which all webservices should inherit. This base class provides the functions for input validation (each input object now implements an IValidating interface that is called so the object can basically validate itself) and authentication/authorization. This still left behind a large chunk of code ("large chunk" being relative here: it was no more than 18 lines of code, but for most functions that was still more than 50% of the code in that function). The problem is that each webservice function returns a specific return type based on the function itself. A function called "CreateAccount" returns a "CreateAccountResult" object, a function called "SetOptions" returns a "SetOptionsResult" object, etc. So it's not possible to write a helper method, or a method in the base class, like I could with the validation code or the authentication/authorization code. But then I remembered generic methods.

I don't understand why I've never done these optimizations before. I'd used generic methods (well, templates) years before in C++ and I knew it was possible in C#. I just never thought about it. So with this new idea in mind I started coding.

We throw our own exceptions in the application. And we make a distinction between two kinds of errors: functional errors and technical errors. The main difference between the two is that functional errors are logged as warnings and technical errors are logged as errors. Functional errors come from business logic decisions (invalid input, creating an account that already exists, etc.) and technical errors come from unexpected problems (SQL Server unavailable, an external webservice unavailable, etc.). So all our own exceptions inherit from one of these two base exceptions. Also, all exceptions have an error message and an error code.

The results we return in our webservice functions also inherit from a common base (the BaseResult class) that provides them with at least an error message and an error code (which get filled from the exception). With all that in mind I copied and changed the code that looked like:

string errorMessage = "CreateAccount error: " + e.Message;
if (e is TechnicalException)
{
    TechnicalException technicalException = (TechnicalException)e;
    ErrorLog.Error(errorMessage);
    result = new CreateAccountResult(technicalException.ErrorCode, technicalException.Message);
}
else if (e is FunctionalException)
{
    FunctionalException functionalException = (FunctionalException)e;
    ErrorLog.Warn(errorMessage);
    result = new CreateAccountResult(functionalException.ErrorCode, functionalException.Message);
}
else
{
    ErrorLog.Error(errorMessage, e);
    result = new CreateAccountResult(-1, "An unhandled exception occurred: " + e.Message);
}

Into a function in the base class that looks like:

protected T HandleException<T>(Exception e, string functionName) where T : BaseResult, new()
{
    T result = null;
    string errorMessagePrefix = functionName + " ";

    if (e is TechnicalException)
    {
        TechnicalException technicalException = (TechnicalException)e;
        ErrorLog.Error(errorMessagePrefix + technicalException.Message);
        result = new T { Code = technicalException.ErrorCode, Message = technicalException.Message };
    }
    else if (e is FunctionalException)
    {
        FunctionalException functionalException = (FunctionalException)e;
        ErrorLog.Warn(errorMessagePrefix + functionalException.LogErrorMessage);
        result = new T { Code = functionalException.ErrorCode, Message = functionalException.Message };
    }
    else
    {
        ErrorLog.Error(errorMessagePrefix + ": " + e.Message, e);
        result = new T { Code = -1, Message = "An unhandled exception occurred: " + e.Message };
    }

    return result;
}

And replaced all code that is similar to the first code block with this one line:

result = HandleException<CreateAccountResult>(e, "CreateAccount");

A significant reduction as you can see. I definitely should have thought about this before.

Tuesday, July 8, 2008

Windows Server 2008 on a laptop

Last week I got a new laptop for work. It's a nice, fast Dell Precision M6300. On my old workstation I had Windows Server 2003 R2 running. I'm a developer, so I want to develop on the platform my software will run on. So on my new laptop I wanted to run Windows Server 2008, the server counterpart of Windows Vista.

Using this very nice blog entry from Vijayshinva Karnure I was able to make Windows Server 2008 look and work almost like Vista. Almost, because I'm missing two things that I have noticed so far. First off is the Windows Sidebar and secondly is Bluetooth support.

Now if you google for these problems, you'll find solutions to both of them. To enable Bluetooth support you'll need to install your drivers as usual and then perform some INF magic to persuade Windows to use them. To enable the sidebar you'll need to copy some files from a Vista installation and copy some registry settings. From what I read this will then work just fine, but I'm not comfortable with it.

You see, these things aren't part of the supported Windows platform, so if there are any security issues, you are on your own. Microsoft won't provide sidebar patches for Windows Server 2008, because they never released a Windows Server 2008 with Windows Sidebar. So you'll have to keep an eye on all security updates and patch the files by hand, or (more likely) let the software run with all its security issues. And that's not something I'm willing to do.

I also discovered that if you install the Hyper-V role because you might want to run some virtual machines for testing, sleep and hibernate will be disabled. Sleep and hibernate are incompatible with Hyper-V, so you won't be able to use them. That sucks, so I uninstalled the Hyper-V role, but at least I have the option of installing it if I do need it some time in the future. I will just need to learn to live without hibernate then.

And then I just ran into another issue. For remote workers my organization uses Check Point VPN-1 software, but as it turns out they don't have 64-bit versions of their client software. Even though 64-bit versions of Windows have been available for (rough estimate) 3 years now, they never saw fit to make their software work. According to their forums they will release a 64-bit version of some new software in Q4 2008 and this will fall under the existing licenses for the time being. In other words, you'll have to pay extra for 64-bit versions of their client software eventually.

Other than that, I'm pretty happy with my flashy new laptop :-)

Tuesday, June 24, 2008

Working with others = my glass ceiling

I've never really worked on projects that involved more than 3 people on the development side of things. That is, until January this year. At the start of this year my department started working on the biggest, most important piece of software we have ever built. And almost everyone from my team is involved. It's currently a 10-man project, 6 of whom are developers.

So now we're 6 months in and we're in the phase where I get to dig around in other people's code, be it for fixing bugs or plain and simple code reviews. And now I've started to notice something negative about myself. I've noticed that I regard code that was not written by me as bad code, even before I've taken the time to examine it. I didn't notice it immediately, but I've started noticing it this week, and thinking about it, it explains some of my feelings towards certain modules.

Since I think the code is bad, even if it is better than what I would have written, I always want to refactor it, change it. And that's not good. Well, sometimes it is, but not always. And when I do a code review I should pay particular attention that I judge code on how it works, not on how I feel about it.

As a programmer you can get only so far flying solo. To progress as an effective programmer you must also learn to work well with others and be part of a team. This is something I know, but my unconscious doesn't appear to want it. Now I know I can start working on it. And in the end it will make me a better programmer.

Wednesday, June 18, 2008

I've never liked DRM

This time nothing about programming, but about my other great hobby: video games.

I recently bought Mass Effect for the PC. And it truly is an awesome game, but even though I bought it and I have the 2 DVDs lying right here on my desk, I never installed it. Instead, I downloaded the illegal, cracked, version and installed that. I figured it's not illegal since I did in fact buy the game. I also registered the CD key on the Mass Effect web site, so they know the game I bought was bought by me.

Why did I install the cracked version and not retail version? DRM.

The retail version comes with the SecuROM copy protection. But I don't consider it protection of any kind. It's system-crippling software that creates security problems on your system, makes it slow and unstable, and exists only because the big video game publishers consider you a pirate no matter what. Mass Effect takes it even further. It requires you to activate the game, and you can activate it only 3 times. After that you must purchase (yes, you must buy a new copy of the software) a new CD key. They had planned it even worse: they wanted the game to require re-activation every 10 days. After public outcry they changed that to unlimited time, but still only 3 activations.

So when do you need to reactivate? When you get a new system. And a new system in this case is when you reinstall Windows or replace some hardware. From an article linked above it seems to consider a new video card equivalent to a new system. That's as brain-dead as Windows Vista needing reactivation because replacing my WiFi card must mean I have a new system.

Normally people would "vote with their money". Not buying the game would send a clear signal, but I think that in this case it would only play on the paranoid minds of the publishers. If people didn't buy the game because of the DRM, they'd just figure people were pirating it and they'd need more DRM the next time around. Never even considering that the DRM is what made people not want to buy it in the first place!

When will these people get it into their fracking heads that DRM is a big burden for honest consumers who bought the product, while it's no problem at all for the pirates. The game is cracked, the DRM removed, as soon as the game is available. It's just pestering the people who pay your salary. And it should stop!

Sunday, June 8, 2008

Counter-intuitive optimisation

I've always been interested in game programming. I never actually managed to create a full, completed game, but from time to time I do start another project. Currently I'm messing around with the XNA Game Studio 3.0 CTP. When googling for answers to some game-programming-related problem, I ran into a developer's journal. The journal was pretty interesting and I started reading all the entries. It's about a developer who's creating a space MMO. This particular developer mainly focuses on the graphics engine, so while all the specific graphics lingo and math is lost on me, I do understand the more general programming bits, and one bit really surprised me.

On November 7, 2007 he wrote a piece about optimising a noise function. His project uses a noise function coded in a high-level programming language and it appeared too slow. So this developer decided, like many other developers would, to rewrite the function in low-level assembly to gain performance. This kind of manual optimisation is very hard, but generally yields pretty good results. But when he hand-optimised the function it wasn't faster. In fact it was 2 times slower than the implementation written in the high-level programming language. Programming in assembly language is hard. It's very low-level and you need a really good understanding of how the CPU architecture works to get the best out of it. So it's not surprising that a compiler written by a lot of very smart people will produce faster code than a single, simple developer can produce on his own. But this didn't appear to be the problem. No, the problem was using a lookup table.

A lookup table is a very simple (and usually very effective) optimisation technique. If the calculations are slow, but limited in the number of possible inputs, it's easy to calculate all the possible values once and store them, so that later they don't need to be calculated again, but can just be looked up. In a lot of game-programming books I've read this was usually the first optimisation authors would recommend. It's easy and it just works. Not anymore, it seems. After the developer had reprogrammed the noise function in assembly language, the bottleneck appeared to be not the calculations, but the memory accesses into the lookup table. So by eliminating the lookup table and just calculating each value every time, he made the function faster. And that is not something I would have thought of.

The funny thing is, this is actually something Steve McConnell warns about in Code Complete. In this excellent book he warns against optimising performance by 'guessing' what's faster. Things that 'feel' like they would be faster aren't necessarily faster. And things that were faster last year won't necessarily be faster on this year's CPUs, compilers and operating systems. The one true way to determine what is faster and what is not, is by measuring.
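That advice is cheap to follow. Here's a minimal sketch of measuring both variants of a function (the sine table is stand-in arithmetic of my own, not the journal author's noise code):

```csharp
using System;
using System.Diagnostics;

public class MeasureDontGuess
{
    const int TableSize = 1024;
    static readonly double[] SinTable = BuildTable();

    static double[] BuildTable()
    {
        double[] table = new double[TableSize];
        for (int i = 0; i < TableSize; i++)
            table[i] = Math.Sin(2 * Math.PI * i / TableSize);
        return table;
    }

    // Version 1: the classic lookup-table optimisation.
    public static double SinLookup(int i)
    {
        return SinTable[i % TableSize];
    }

    // Version 2: just calculate the value every time.
    public static double SinCalculate(int i)
    {
        return Math.Sin(2 * Math.PI * (i % TableSize) / TableSize);
    }

    static void Main()
    {
        double sum = 0;
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < 10000000; i++) sum += SinLookup(i);
        Console.WriteLine("lookup:    " + watch.ElapsedMilliseconds + " ms");

        sum = 0;
        watch = Stopwatch.StartNew();
        for (int i = 0; i < 10000000; i++) sum += SinCalculate(i);
        Console.WriteLine("calculate: " + watch.ElapsedMilliseconds + " ms");
    }
}
```

On one machine the table may win, on another the recomputation; the numbers decide, not intuition.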

Saturday, May 31, 2008

Clouds for the masses

I've been hearing about "The Cloud" for some time now. Mostly from Microsoft engineers. There is 'Cloud Computing' and 'Cloud Storage'. Sending things to 'The Cloud'.
The cloud in this case is of course the internet. They use the term "cloud" because when you draw a network diagram you usually symbolize the internet with a cloud.

So as a developer I've been hearing about clouds for some time now. And now Valve (makers of Half-Life and Portal) have announced Steam Cloud. The Steam Cloud framework enables games to store settings and saved games on their servers on the internet (in the cloud). Aside from this being totally awesome, it's just funny they chose the 'cloud' name. Of course it fits perfectly with their 'Steam' brand name and it makes sense to most people because steam produces clouds. But to the developer it also makes sense, because it does something in "The Cloud".
I thought it was funny to see a developer-centric term go mainstream without losing its meaning (even though non-developers probably won't give it the same meaning developers do).

Monday, May 26, 2008

An internal API exposed as SOAP is not a webservice

As the system we build at work grows, more and more other systems are being connected to it. Such is the case with a system that handles domain registration and hosting products for our smaller customers.

This system was originally self-contained. It's a web application that has all the work-flow, provisioning and billing built in. And now some architects have decided that it should be connected to our system, so the big commercial systems can all offer these products without having to resort to manual steps performed by human operators.

There was resistance. The product managers and designers didn't like the idea. I assume the builders didn't like it either, because the development is out-sourced and the change would mean less income for that company. So after some political 'nudging' it was accepted that our system should be able to talk to theirs. And they were going to give us webservices that would enable our system to offer the same products and functionality as their web interface does.

When I got the documentation on the webservice I was unpleasantly surprised. There are a lot of methods, and they're all small methods with a very small, singular purpose. As far as I can tell, they took the public properties of their Business Logic Layer and their Data Access Layer, made those all into SOAP methods and called it a nice webservice interface. It's going to be a challenge to get that working smoothly...

Saturday, May 17, 2008

Your reference is not my reference

This week we had a discussion at work. It was about reference IDs. When a system calls the webservice on another system, sometimes you need to specify a reference ID. One such case is with asynchronous webservices. The system does the call, but does not get an immediate answer (other than: "Yeah, we're working on it"). At some point in the future the system gets a call from the webservice (a callback) delivering the result. One way (and in my opinion the easiest way) to deal with this is to have the system send a reference ID with the original request and have the webservice put that reference in the asynchronous result. This way the system can correlate the result with the request and process it.
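In code the correlation boils down to a dictionary keyed on the reference ID. A minimal sketch (the class and method names are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// Sketch of correlating asynchronous callbacks with the original requests.
public class PendingRequestStore
{
    private readonly Dictionary<string, string> pending = new Dictionary<string, string>();

    // Before calling the webservice, remember the request under a fresh reference ID.
    public string Register(string requestDescription)
    {
        string referenceId = Guid.NewGuid().ToString();
        pending[referenceId] = requestDescription;
        return referenceId;
    }

    // When the callback arrives, look the original request up and remove it.
    public string Complete(string referenceId)
    {
        string original;
        if (!pending.TryGetValue(referenceId, out original))
            throw new InvalidOperationException("Unknown reference: " + referenceId);
        pending.Remove(referenceId);
        return original;
    }
}
```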

But as we found out today, that's thinking in "functions": calling a webservice function and getting its result is an operation that's completely separate from anything else. Anything can happen between the call and the result, and once the result arrives, that exchange is finished. But for some calls we need to think in "processes".

Our system is like a webservice bridge. We expose a generic interface to commercial systems and we consume very specific interfaces from technical back-end systems. So when a commercial system sends us a request, we map it onto a specific request to a back-end system. This way the commercial system doesn't need to know all the gritty details of the technical system. Also, all of our calls are atomic... except for two. Normally the commercial system deals with the process and that has never had any effect on our system, until we needed to support a certain function where it did. Now, in my opinion the biggest problem was caused by a faulty design of the technical back-end, but it did expose a pitfall we could now fall into ourselves, since we are designing a new interface for the commercial systems. Sometimes you not only need to know which request was responsible for the incoming result, but also which commercial process it's part of. This is required when certain steps must be performed in order and the technical back-end system also needs to follow up on an earlier request with a follow-up request.

So we had a discussion about this. Do we take the reference from the commercial system and pass it on to the technical system, out-sourcing the generation of references as it were? I was opposed, because I think a reference should only have meaning to the system that generated it. If we use the reference from the commercial system, we give it a meaning it shouldn't have. Also, if some time in the future we add more commercial systems, there is a chance (and knowing Murphy, it will happen) that more than one commercial system will come up with the same reference, and then we have a problem.

My take on the whole situation is that we apparently need both a "function-reference" and a "process-reference". We generate our own function reference and use the process reference of the system that guards the process. This could be our system, in which case we generate the process reference, or it could be a commercial system, in which case we just take the reference and pass it on.

Tuesday, May 6, 2008

Some Ajax/WCF things I learned

I'm reworking a small ad-hoc web application I wrote because, like all ad-hoc tools, it got a rather permanent status. It's like they say: "Nothing is as permanent as a temporary solution".
I wrote the first version using ASP.NET MVC. The only reason I did so was because I wanted to experiment with MVC and they told me to create the application in any way I saw fit. So it was a great way to experiment a little. Now that the web application will be used on a more permanent basis and we will have to actively maintain it, I decided to rewrite it. Also, an additional requirement made the whole MVC approach a little harder than it ought to be. So I rewrote the web application, but this time I decided to use a WCF JSON webservice and go all-out Ajax. Still experimental, but less CTP-al.

So my first order of business was to write a WCF webservice that did the things I needed it to do. This was easy. No hurdles there.
Did you know you can put a [DataMember] attribute on a private member? It will even show up in the WCF service while still being private in your code. I used this because I had some data that needed to be exposed in the webservice as a string, but as a different data type internally. So I made a private property that made a string out of that data and put the [DataMember] attribute on it. And then I had a public property without the attribute for use inside the code. Very handy.
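Roughly what that construction looks like (a sketch with invented names, serialized here with DataContractSerializer to show the private member ending up in the XML):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class Order
{
    // Used everywhere in the code, but not part of the contract.
    public DateTime Delivery { get; set; }

    // Private, yet serialized: the wire format is a plain string.
    [DataMember(Name = "Delivery")]
    private string DeliveryText
    {
        get { return Delivery.ToString("yyyy-MM-dd"); }
        set { Delivery = DateTime.ParseExact(value, "yyyy-MM-dd", null); }
    }

    // Helper to show the private member appearing in the serialized XML.
    public static string ToXml(Order order)
    {
        DataContractSerializer serializer = new DataContractSerializer(typeof(Order));
        using (StringWriter stringWriter = new StringWriter())
        using (XmlWriter xmlWriter = XmlWriter.Create(stringWriter))
        {
            serializer.WriteObject(xmlWriter, order);
            xmlWriter.Flush();
            return stringWriter.ToString();
        }
    }
}
```

The resulting XML contains a "Delivery" element with the formatted string, even though the property carrying it is private.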

Next up after getting the WCF webservice done was consuming it through JavaScript. Nothing really difficult here either. Just Google for JavaScript and WCF and you'll find lots of examples. That's what I did. The only thing I did run into was that at first I tried to have two separate web projects: one for the WCF webservice and one for the web application. For some reason I never got the JavaScript to work with the WCF webservice. I don't think it's impossible, but after fiddling with the WCF configuration for an hour I gave up and just made it into one project, and it worked.

After I got the WCF/JavaScript communication to work I started to work on the JavaScript logic. And in doing so I learned some stuff about JavaScript.

JavaScript isn't really object oriented, but you can fake it.
You can create a new 'object' using the new keyword and a function declaration:

function MyObject() {}
var myObject = new MyObject();

If you assign "MyObject" without the 'new' keyword, the variable "myObject" would just contain a reference to a function named "MyObject": a function that doesn't do anything. But add the 'new' keyword and now it's something like an object.
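From what I've been able to piece together so far (a sketch, with made-up names), you add per-instance properties inside the constructor function and shared methods on its prototype:

```javascript
// Constructor function: 'this' refers to the new object being built.
function MyObject(name) {
    this.name = name;              // a per-instance property
}

// Methods shared by all instances go on the prototype.
MyObject.prototype.greet = function () {
    return "Hello, " + this.name + "!";
};

var myObject = new MyObject("world");
console.log(myObject.greet());
```

The prototype part is what makes it feel class-like: every instance created with 'new' picks up the same greet method.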
It's possible to add properties and methods to the object as well, though I haven't explored much beyond the basics.

Also, I had the site working in Firefox and Internet Explorer 8 beta, but when I tried it with Internet Explorer 7 it didn't work. It turns out that doing something like this:

var element = document.getElementById("someId");
element.setAttribute("class", "cssClass");
element.setAttribute("onclick", "alert('Yay!');");

Works perfectly in Firefox and IE8, but not in IE7. The annoying part is that if you use the Developer Toolbar in IE7 and inspect the DOM properties of the element, they look good. I created a similar element in straight HTML and compared it to the element I added and set up dynamically through JavaScript, but I didn't see any difference. Yet it didn't work.
Turns out you can't do this in IE7 and you need to use the proper properties of the element to set stuff up. So the above needs to be:

var element = document.getElementById("someId");
element.className = "cssClass";
element.onclick = function() { alert('Yay!'); };

Luckily this also works in Firefox and IE8.

More as I learn more...

Sunday, April 27, 2008

Unit-testing private methods

Suppose you have a class that has some private methods and you want to make sure these methods function correctly, so you want to write unit-tests for them. The problem is that you can't call these methods, because they are private.

The class we want to test:

public class ClassWithPrivateMethods
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }

    private int Substract(int initialNumber, int numberToSubstract)
    {
        return initialNumber - numberToSubstract;
    }
}

Testing the "Add" function is simple. It's public, so we can just call it from a unit test:

public void TestAddition()
{
    ClassWithPrivateMethods classToTest = new ClassWithPrivateMethods();
    int result = classToTest.Add(5, 5);
    Assert.AreEqual(10, result);
}

Testing the "Substract" function is a problem. You can't call the method from the unit-test. One way around this is creating a new class, specific for the unit-test, that inherits from the class you want to test and use the "new" keyword to make the function available as a public function. The problem with this approach is that the method can't be private. It has to be protected at the minimum:

public class ClassWithPrivateMethods
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }

    protected int Substract(int initialNumber, int numberToSubstract)
    {
        return initialNumber - numberToSubstract;
    }
}

Which means you'll have a test class like this:

private class ClassWithPrivateMethods_TestWrapper : ClassWithPrivateMethods
{
    public new int Substract(int initialNumber, int numberToSubstract)
    {
        return base.Substract(initialNumber, numberToSubstract);
    }
}

And a unit-test like this:

public void TestSubstractionInt()
{
    ClassWithPrivateMethods_TestWrapper classToTest = new ClassWithPrivateMethods_TestWrapper();
    int result = classToTest.Substract(10, 5);
    Assert.AreEqual(5, result);
}

I've done this before and it works well. But this way you still can't have private functions.
Enter reflection.
With reflection you can do anything you want to the class. Even invoke private methods! So we'll change the helper class into this:

private class ClassWithPrivateMethods_TestWrapper : ClassWithPrivateMethods
{
    public int Substract(int initialNumber, int numberToSubstract)
    {
        Type t = typeof(ClassWithPrivateMethods);
        return (int)t.InvokeMember("Substract",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
            null, this, new object[] { initialNumber, numberToSubstract });
    }
}

Of course you don't really need the wrapper class; you could just as easily put these few lines of reflection inside the unit-test itself if you wanted. But if you do use the wrapper class, you can just instantiate it and call the "Substract" method for your unit-testing purposes, just like in the unit-test above.
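For completeness, here's what that looks like without the wrapper (the class is repeated so the snippet stands on its own; in a real test project you'd use Assert.AreEqual instead of the throw):

```csharp
using System;
using System.Reflection;

// (The ClassWithPrivateMethods from above, repeated so this compiles standalone.)
public class ClassWithPrivateMethods
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }

    private int Substract(int initialNumber, int numberToSubstract)
    {
        return initialNumber - numberToSubstract;
    }
}

public static class SubstractTests
{
    public static void TestSubstractionWithoutWrapper()
    {
        ClassWithPrivateMethods classToTest = new ClassWithPrivateMethods();

        // The reflection call, inlined straight into the test.
        Type t = typeof(ClassWithPrivateMethods);
        int result = (int)t.InvokeMember("Substract",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
            null, classToTest, new object[] { 10, 5 });

        if (result != 5) throw new Exception("Expected 5, got " + result);
    }
}
```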
This also works for methods that have been overloaded for multiple data types. Suppose we have the following class we want to test:

public class ClassWithPrivateMethods
{
    public int Add(int firstNumber, int secondNumber)
    {
        return firstNumber + secondNumber;
    }

    private int Substract(int initialNumber, int numberToSubstract)
    {
        return initialNumber - numberToSubstract;
    }

    private float Substract(float initialNumber, float numberToSubstract)
    {
        return initialNumber - numberToSubstract;
    }
}

Our wrapper class would look like this:

private class ClassWithPrivateMethods_TestWrapper : ClassWithPrivateMethods
{
    public int Substract(int initialNumber, int numberToSubstract)
    {
        Type t = typeof(ClassWithPrivateMethods);
        return (int)t.InvokeMember("Substract",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
            null, this, new object[] { initialNumber, numberToSubstract });
    }

    public float Substract(float initialNumber, float numberToSubstract)
    {
        Type t = typeof(ClassWithPrivateMethods);
        return (float)t.InvokeMember("Substract",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
            null, this, new object[] { initialNumber, numberToSubstract });
    }
}

And you can unit-test like this:

public void TestAddition()
{
    ClassWithPrivateMethods classToTest = new ClassWithPrivateMethods();
    int result = classToTest.Add(5, 5);
    Assert.AreEqual(10, result);
}

public void TestSubstractionInt()
{
    ClassWithPrivateMethods_TestWrapper classToTest = new ClassWithPrivateMethods_TestWrapper();
    int result = classToTest.Substract(10, 5);
    Assert.AreEqual(5, result);
}

public void TestSubstractionFloat()
{
    ClassWithPrivateMethods_TestWrapper classToTest = new ClassWithPrivateMethods_TestWrapper();
    float result = classToTest.Substract(10.0f, 2.5f);
    Assert.AreEqual(7.5f, result);
}

Wednesday, April 23, 2008

The blame game, or: It works on our end.

The system I'm working on is in the end-to-end testing phase. It has been tested for the past 2 weeks or so, and today the testers ran into an issue where requests sent from the first system timed out. Their view on the issue: it's our problem.

So we did some problem analysis. We checked some log files, but we never even saw a request coming into the webserver. Then I sent the same request they were trying to do and it got accepted, it got processed and they even got the callback to notify them that the request had been completed. So we established that our system worked. Even the engineer from the other system agreed to that.

So the engineer went to check on stuff. And at the end of the day I got an e-mail that basically stated:
"We checked everything. The request is correct and we have the correct URL configured. So the only conclusion is that it must be a problem in [the system I work on]. And we'd like them to solve the problem ASAP."

This reminded me of a joke I read in David Platt's excellent Why Software Sucks...and What You Can Do About It. It's a joke about Microsoft, in a chapter about Microsoft. The short version goes something like this:

A Microsoft engineer goes into the army and has to shoot at the shooting range, but he misses every time. The drill sergeant gives the engineer crap and tells him to do better. At which point the engineer holds his finger in front of the muzzle of the gun and shoots his finger off. Then he points the bloody stump towards the target and says: "Well, it comes out here alright, so the problem must be on their side".

This is exactly what I thought about when I read that e-mail, and it's this kind of shortsightedness that really pisses me off. A good engineer, in my opinion, always assumes the problem is in his own code. Only after very careful examination and testing should you state that it's probably a problem on the other side. And even then the engineer should fully cooperate in tracking down the problem. After all, it's not like we're competitors. We're all building parts of the same product and work at the same company towards the same goals.
With this kind of thinking we'll never deliver a good system.

Tuesday, April 15, 2008

LINQ means working fast

Now granted, I have not yet done a really big project with LINQ and I have really limited experience with LINQ so far, but let's not kill the hype here, okay? :-)

At my office we have almost finished our part in a whole connected chain of systems to provision products that my company offers to its end-users. The system we build is an abstraction on top of the technical provisioning systems, and through our system they are integrated for use by the commercial systems at the top of the chain. In the beginning of last week they told me that one of the big commercial systems would need more time to get done. "More time" in this case meant around a month. Knowing how good estimates by programmers are, I'm positive it will be more than a month, but I digress. Even with that delay they still want to start the pilot project and start using parts of the provisioning chain. So they asked me to build a kind of web front-end to our system, so that they could at least start handling orders manually while the big commercial system was delayed for (at least) a month. They asked me to do it in 2 or 3 days, and because they were desperate they told me to build it no matter what. The front-end would contain next to no logic and would only need to communicate with our system, which I know very well, so I decided to build it with some new technology I was itching to try out: ASP.NET MVC and LINQ.

I had played with ASP.NET MVC before, but they updated the preview bits and I found some welcome changes. Aside from spending some extra time in getting the URLs to look nice it wasn't really that challenging. No, the fun part was in using LINQ.

In my team we have defined a set of "best practices", some development "rules", and one of these rules is that we always use Stored Procedures (sprocs). So when I write a Data Access Layer (DAL) for any program, I always create a data model (the database tables), then I create sprocs for all the things I need to do to the data that will be stored in the database (reads, inserts, updates, etc.) and then I write some C# code to access these sprocs and to create C# objects out of the returned data (and of course code to feed data into the sprocs). This time I chose to go with LINQ, and I've never written my DAL in such a short time, even though I still had to figure out some basic things about how to use LINQ.

I think that I wrote my DAL with LINQ in less than 20% of the time it would have taken me the 'conventional' way. It's so simple. Now, be aware that I did not do any complicated stuff here. Just your basic selects, inserts and left outer joins. Nothing more. But I think that's exactly where the strength of LINQ lies. All the simple stuff is trivial to do in LINQ, but takes more effort to do with sprocs and SqlDataReaders. Why waste your valuable development time on that?
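As an illustration of how little code a left outer join takes, here's a sketch that runs against in-memory lists (the entity names are made up); with LINQ to SQL the same query shape gets translated to SQL for you:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer { public int Id; public string Name; }
public class OrderRow { public int CustomerId; public string Product; }

public static class LinqJoinDemo
{
    // Left outer join: every customer, with "(none)" when there are no orders.
    public static List<string> CustomersWithOrders(List<Customer> customers, List<OrderRow> orders)
    {
        var query = from c in customers
                    join o in orders on c.Id equals o.CustomerId into customerOrders
                    from o in customerOrders.DefaultIfEmpty()
                    select c.Name + ": " + (o == null ? "(none)" : o.Product);
        return query.ToList();
    }
}
```

Compare that to writing a sproc, wiring up a SqlCommand and walking a SqlDataReader by hand for the same result.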

I encourage every developer out there to try LINQ. Give it a good whirl and don't judge the LINQ book by its cover. Yes, it generates dynamic SQL, but I've run some tests (some very unscientific tests) and found that LINQ is faster than using a SqlDataReader with sprocs on simple queries. I took a large table (500,000+ rows) with some 12 columns, made a sproc and a LINQ query (one that resulted in the exact same SQL as the sproc) and did some loop testing. When doing the same query as fast as possible in a loop, LINQ was faster than the SqlDataReader/sproc method until around 300 queries. After that LINQ seemed to get slower. Personally I think this has to do with the fact that for the sproc SQL Server only needs to see a single word (the name of the sproc) to recognize what you want and re-use its cached plan, whereas with the LINQ query it needs to compare the whole query text to notice it already exists in the cache. And maybe network traffic also plays a role; network traffic is less with a sproc. So with these test results in mind, sprocs would only be faster if you have a really high-traffic site (but even then you should measure before you start to optimise). And if you want to be Agile, you'll release early and often. So you can start cranking out features while using LINQ for your DAL, so you won't waste precious time on sprocs and such, and then replace the LINQ DAL with a sproc DAL when you start to notice the performance hit. Or you combine them, which is also entirely possible.

Sunday, April 6, 2008

About being a good programmer

I consider myself to be a good programmer. I am my own worst critic: I always question methods and don't use methods "just because I did last time". I always assess whether they are appropriate for the new challenge.
I also try to keep up with new technologies and trends in development and although I can be a bit too enthusiastic about certain things, I never really use new technology just because it's new.

At the office my team is currently at the end of a very big (for us) project. As always, the time was short and the deadline was set in stone, but no one really knew exactly what we had to build. So six colleagues and I started working on the project. It was jump-started by a Software Architect, who created the very basic architecture, and then I took over.
The system is basically a routing and handling system for product provisioning and we envisioned a program where we would just write a plug-in for a specific product to handle the product specific tasks. That way we can easily extend the system for new products.
In the initial planning the system had to handle 4 products; of course this soon turned into 5 products and then 6 products. So two other colleagues and I set off, each assigned his own product(s), to build the plug-ins.
One of the colleagues copied the basic architecture of his plug-ins from my plug-ins, but the other colleague took a whole different route.

After the initial programming was done and we were in the test phase, we finally had time for code reviews. As I was reviewing the code of the colleague who took a very different route than I did, I found it hard to determine what he was doing. I didn't get it, and his solution seemed overly complex to me. Then he explained his architecture....

I was shocked. His architecture was much better than mine. I was so set in the mind-set of my own architecture that I failed to see the beauty of his decisions. I was shocked and ashamed. Ashamed because I had thought my design was better than his before he explained it to me.

This reiterated a very important lesson to me. I am not perfect, I won't always do the best thing. I might not even be close from time to time.
One of the things I consider part of being a good programmer is that I will admit it when I am wrong and that I will accept a better solution, even if my own solution was the inferior one. I'm not too proud to admit I am not perfect, but somehow, this time, it kind of sneaked up on me.

The lesson I take away from this? Don't judge too soon and don't think you've got the best solution. There will always be a better solution.

Saturday, March 22, 2008

Weakly typed webservices

Weakly typed programming versus strongly typed programming is one of the many religious wars fought by programmers. In my opinion it's all a matter of personal preference, and I myself prefer strongly typed languages. They give me compile-time errors when I do something really stupid and I like that, even if it means I must work a little harder to convert one type to another.

On the other hand, I have found out that I don't like strongly typed webservices. What is a strongly typed webservice? Well, consider that a webservice is basically just some XML sent over an HTTP connection (yes, I know there are a lot more varieties, but when most people talk about webservices they tend to mean SOAP over HTTP), so everything is basically a string. A weakly typed webservice would have all its inputs and outputs defined as strings. A strongly typed webservice on the other hand would have its inputs and outputs defined as the types of data they're supposed to represent: ints, bools, DateTimes, etc.

To make the discussion a little clearer, here is a weakly typed webservice (in C#):

public string WeakWebservice(string inputDate);

And this is a strongly typed webservice:

public int StrongWebservice(DateTime inputDate);

So why would I want a weakly typed webservice?

Consider that all values are sent over HTTP as text, so that DateTime parameter 'inputDate' for the strongly typed webservice is possibly sent as "2008-03-20T18:39:12Z".

The SOAP stack at the webservice side will then take this string and interpret it so it can be turned into a DateTime object and given to the webservice entry point. At that point the code you or I wrote gets in on the action and is allowed to do stuff. Everything before that is done by the SOAP stack (in my case the ASP.NET SOAP stack).

Since everything is sent as text, there is no objection to calling the webservice using a client that doesn't understand the .NET DateTime class. All goes well as long as the client sends the text formatted in a way the SOAP stack expects it (as is defined in the SOAP specification). For as long as my client sends "2008-03-20T18:39:12Z", it will be interpreted and turned into a DateTime object for me. But what happens when I make an error in my string and I send something that can't be interpreted as a .NET DateTime object?

You'll get an error:

System.Web.Services.Protocols.SoapException: Server was unable to read request. ---> System.InvalidOperationException: There is an error in XML document (11, 74). ---> System.FormatException: Input string was not in a correct format.
   at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)
   at System.Number.ParseInt64(String value, NumberStyles options, NumberFormatInfo numfmt)
   at System.Xml.XmlConvert.ToInt64(String s)
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReader1.Read3_CustomerInfo(Boolean isNullable, Boolean checkType)
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReader1.Read21_IptvGetAccountInfo()
   at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer.Deserialize(XmlSerializationReader reader)
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
   --- End of inner exception stack trace ---
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
   at System.Web.Services.Protocols.SoapServerProtocol.ReadParameters()
   --- End of inner exception stack trace ---
   at System.Web.Services.Protocols.SoapServerProtocol.ReadParameters()
   at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()

I don't know about you, but I find this error hardly useful. Now consider a webservice function with 3 objects as parameters, each object containing more parameters a few levels deep. How useful is this error then in finding out which of the parameters is wrong? Granted, this is specific to .NET and I have no idea how Java, for example, would handle this, but you see my point?

In my company we return error codes and error messages as part of the return value of the webservice. We call these "functional errors". We decide which error code means which error situation and we don't use SOAP Faults or HTTP error codes. That is, we don't want to. If the webservice isn't available, or the authentication goes wrong, you'll still get an HTTP error, and in the situation shown above, you'll get a SOAP Fault.

But if the specification says you need to send a string no more than 10 characters long, we check that in the code and return an error code if the string is too long. Yet if the specification says you need to send a number, but you send a word, you'll get a SOAP Fault. That's not consistent.

The next time I'm tasked with creating a new webservice, I will make it weakly-typed. All the input and output will be defined as strings. That way I can validate the input myself and return a nice functional error when it's incorrect.
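In practice that means the webservice does its own parsing and returns a functional error instead of letting the SOAP stack fault. A sketch of what I have in mind (the error codes and names are made up):

```csharp
using System;

// Sketch: a weakly typed operation that validates its own input and
// returns a functional error code instead of a SOAP Fault.
public class WeakResult
{
    public int ErrorCode;        // 0 = OK, anything else is a functional error
    public string ErrorMessage;
    public string Value;
}

public static class WeakService
{
    public static WeakResult GetDayOfWeek(string inputDate)
    {
        DateTime parsed;
        if (!DateTime.TryParse(inputDate, out parsed))
        {
            return new WeakResult
            {
                ErrorCode = 1001,
                ErrorMessage = "inputDate is not a valid date: " + inputDate
            };
        }
        return new WeakResult { ErrorCode = 0, Value = parsed.DayOfWeek.ToString() };
    }
}
```

A malformed date now comes back as error code 1001 in the response body, consistent with every other functional error, instead of the stack trace shown above.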

Sunday, March 16, 2008

Stored procedures, faster or not? (SQL Server)

I'm not really that knowledgeable about databases. I can do a select or two and I understand joins, but anything more complex than that and I won't like it. I don't really like SQL. The language seems very illogical to me for some reason.

We do have a lot of discussions at work about databases. A colleague of mine really loves delving into database problems and he even made a list of SQL standard practices. One of these standard practices says that we should always use stored procedures instead of dynamic SQL statements.

I always want to know the 'why'. Why are we supposed to use only stored procedures? And the reasons invariably are:

  1. Stored procedures are not susceptible to SQL injection.
  2. Stored procedures are safer because you can secure them on the database level.
  3. Stored procedures enable us to update database logic or fix a bug without having to release a new version of the software.
  4. Stored procedures are faster than dynamic SQL.

Almost two years ago I ran into a weblog post that debunked all of these points: Stored procedures are bad, m'kay?.

Given this article, there is still a lot of discussion at the office and people generally still say stored procedures are faster and safer. So yeah, stored procedures are faster than SQL created by gluing statements and their parameters together with string concatenation, but according to the article they're not faster than properly parameterized SQL queries. And as for security at the stored procedure level, we don't do that in our office, so that shouldn't be a concern. Also, when changing a stored procedure, we still need to create a new release of the software, even though only a SQL script changed. That's just the way we work. So the 'software release' argument is also nonsense.
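To make the distinction concrete, here is a small sketch in ADO.NET; the table and column names are made up for illustration:

```csharp
using System.Data.SqlClient;

// Illustration only; the "Customers" table and its "Name" column are invented.
static SqlDataReader FindCustomer(SqlConnection connection, string name)
{
    // Bad: string concatenation. Vulnerable to SQL injection, and every
    // distinct value produces different statement text, so SQL Server
    // can't reuse a cached execution plan:
    //   "SELECT * FROM Customers WHERE Name = '" + name + "'"

    // Better: a parameterized query. The statement text stays constant,
    // so SQL Server can cache and reuse the execution plan, much like it
    // does for a stored procedure.
    var cmd = new SqlCommand(
        "SELECT * FROM Customers WHERE Name = @name", connection);
    cmd.Parameters.AddWithValue("@name", name);
    return cmd.ExecuteReader();
}
```

The parameterized version is the one the article compares stored procedures against, and it's also the one that isn't susceptible to SQL injection.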

I will agree that stored procedures are faster in that you only send the name of the query and the parameters, while with dynamic SQL you'd need to send the entire query with the parameters. So it's faster in that you save a few bytes sent over the wire. And for some really rare cases those few bytes can make a difference, but not in the applications we make.

In the end it's a discussion about what 'feels' better I think. One of the many religious wars fought in the field of software development.

Thoughts on Coding

I've been programming for close to 14 years now. I started programming in x86 assembly, then moved to Turbo Pascal, then Turbo C++, Visual C++, PHP and finally C#. It started out as a hobby and finally it turned into work.

I still program for fun, although not nearly as much as I used to. I'm very busy at work, so when I get home I'd rather play some video-game, watch a movie or spend some time with friends.
I do read a lot about programming or programming-related stuff though. I read a whole host of different weblogs and try to keep up on the latest advancements and technologies.

So, why the weblog? Well, there have been countless times where I ran into a problem, or didn't know how to fix a particular problem, and finally found the answer on some person's weblog. These things have been of tremendous value to me. Since I also figure things out on my own on an almost daily basis, and since I think a lot about design and a lot of other things programmers have to deal with, I thought: "why not put these on a weblog?" I mean, maybe someone out there has the same problem and maybe I can help him out by explaining my solution. And maybe I have a really crazy idea and I would like some feedback on it from other programmers. A weblog is a dialogue after all.

So, if anyone has questions or remarks about anything I post here, leave them in the comments and we can discuss them. I'm looking forward to it.