Saturday, December 5, 2009

DataAnnotations in ASP.NET MVC2 without Dynamic Data

The last few days I’ve been playing with VS2010 beta 2 and ASP.NET MVC2. I hadn’t done that much with the first version of ASP.NET MVC, but I hope to get some more time to play with version 2.

After seeing how easy validation of data input can be in Scott Hanselman’s PDC video on ASP.NET MVC2, I wanted to try it out for myself. I had already started working on an MVC2 project that used manual validation, so I added the data annotation attributes to the class and… it didn’t work. It turns out that the component that transforms the HTTP form data into a .NET object, the Model Binder, needs to support the attributes, and the default binder is very simple and does not. If you talk to a database through the Dynamic Data Framework, a model binder that does support them is used somewhere along the line, and apparently Scott used Dynamic Data in his demo.
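For reference, a minimal sketch of what those attributes look like on a model class (the Person class and its rules here are illustrative, mirroring the class used later in this post):

using System.ComponentModel.DataAnnotations;

public class Person
{
    [Required(ErrorMessage = "A last name is required.")]
    [StringLength(50, ErrorMessage = "A last name cannot be longer than 50 characters.")]
    public string LastName { get; set; }
}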

The project I had started was using the ADO.NET Entity Framework, which does not support data annotations (yet), and I didn’t feel like switching to the Dynamic Data Framework just for the validation. But then it turns out you can easily(?) write your own binder and make it support the data annotation attributes. Or rather, extend the default binder. I found this in a blog post by Brad Wilson about using data annotations in ASP.NET MVC, and it even pointed me to the sample project he talked about in his post. I downloaded it and tried to build it as a .NET 4.0 project, but since the sample was written for MVC1 and .NET 3.5 SP1, I first got compiler errors about missing functions, and when I got it to compile I got a nasty runtime error:

This property setter is obsolete, because its value is derived from ModelMetadata.Model now.

The compiler errors about the missing function took some reasoning, but you need to change this code, which appears in two places and is slightly different in the second place:

if (!attribute.TryValidate(bindingContext.Model, validationContext, out validationResult))
{
   bindingContext.ModelState.AddModelError(bindingContext.ModelName, validationResult.ErrorMessage);
}

Into this:

validationResult = attribute.GetValidationResult(bindingContext.Model, validationContext);
if(validationResult != ValidationResult.Success)
{
   bindingContext.ModelState.AddModelError(bindingContext.ModelName, validationResult.ErrorMessage);
}

After some searching for the runtime error I found an answer on StackOverflow. The code discussed in that answer is a little different than the code in the binder sample I am using, but it was enough for me to get it to work. Basically you need to change this:

var innerContext = new ModelBindingContext()
{
   Model = propertyDescriptor.GetValue(bindingContext.Model),
   ModelName = fullPropertyKey,
   ModelState = bindingContext.ModelState,
   ModelType = propertyDescriptor.PropertyType,
   ValueProvider = bindingContext.ValueProvider
};

Into this:

var innerContext = new ModelBindingContext()
{
   ModelMetadata = ModelMetadataProviders.Current.GetMetadataForType(() => bindingContext.Model, propertyDescriptor.PropertyType),
   ModelName = fullPropertyKey,
   ModelState = bindingContext.ModelState,
   ValueProvider = bindingContext.ValueProvider
};

And it will work.
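Note that for any of this to kick in, the custom binder has to be registered, typically in Global.asax. A minimal sketch, assuming the binder class from the sample is called DataAnnotationsModelBinder:

protected void Application_Start()
{
    // Replace the default binder with the data-annotations-aware one from the sample.
    ModelBinders.Binders.DefaultBinder = new DataAnnotationsModelBinder();

    RegisterRoutes(RouteTable.Routes);
}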

Next was finding out how to use it exactly. In the blog post about the sample binder, the controller action receives the entire model when it is called, but my action only received an identifier, and I needed to get the rest of the data out of the database. So just checking “ModelState.IsValid” wasn’t enough in my case. I also needed to call “UpdateModel” to have the binder transform the input data into the .NET object that I use in my code. The problem I ran into there was that upon updating the model, the validation rules would also run, and an exception would be thrown when the validation failed. This code gives me the exception:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(string lastName, FormCollection collection)
{
   Person person = new Person();
   UpdateModel(person);
   PersonController.personStore[lastName] = person;

   return RedirectToAction("Details", person);
}

It turns out I should not be using UpdateModel, but rather TryUpdateModel. That allows the model to be updated and the validation to fail without resorting to exception handling. Like so:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(string lastName, FormCollection collection)
{
   Person person = new Person();
   bool isModelValid = TryUpdateModel(person);
   if(isModelValid == true)
   {
       PersonController.personStore[lastName] = person;

       return RedirectToAction("Details", person);
   }
   else
   {
       person.LastName = lastName;
       return View(person);
   }
}

Doing it like that gives me the nice error message I was looking for:

[Screenshot: nice-error]

Thursday, July 23, 2009

Windows Workflows that restart themselves?

A few weeks ago I got assigned a fun bug. We have a WF-based order system, and it seemed that occasionally orders that had been sent earlier would start all over by themselves.
And sure enough, when looking at the logging I saw that the orders would start over. So how did this happen?

First off I noticed it only happened with orders that required a callback in their process. So orders that, at one point or another, would be idle and waiting for an external event. All other orders never gave a problem.
Then I also noticed that the orders restarted themselves after the system itself had been idle for around 30 minutes.
So workflows that were idle were restarting after 30 minutes of system inactivity. They were re-processed when a new order was sent (you'd then see a whole slew of old orders restarting).

Whenever something happens in IIS after around 30 minutes of inactivity, I assume it’s related to the fact that after 20 minutes of inactivity (by default) IIS will stop the web service/application to spare resources. The next request that comes in will cause IIS to start the web service/application again. So it’s pretty much safe to assume the problem is related to the web service stopping and restarting.

So why would an order start all over again after the web service is restarted? The most likely reason would be that the Workflow Runtime was unable to save its state. Without knowing where the workflow left off, but knowing that the workflow does exist, it seems logical that the workflow would just restart.

We checked the config and it did have the WorkflowPersistenceService configured and loaded. However, its “UnloadOnIdle” setting was missing, meaning it defaulted to false, meaning workflows don’t unload when they are idle. And more importantly: a workflow is only persisted when you explicitly tell it to persist, or when it is unloaded.
Since neither happened on our system, the workflows never stored their state and restarted when the web service restarted.
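The fix is adding UnloadOnIdle="true" to the persistence service’s <add> element in the config; when you set up the runtime in code it’s the second constructor argument. A minimal sketch of the code version (the connection string is just a placeholder):

using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

WorkflowRuntime runtime = new WorkflowRuntime();

// unloadOnIdle = true: idle workflows are persisted to the store and unloaded,
// so an IIS recycle no longer loses their state.
runtime.AddService(new SqlWorkflowPersistenceService(
    "Initial Catalog=WorkflowStore;Data Source=.;Integrated Security=SSPI;",
    true,                        // unloadOnIdle
    TimeSpan.FromMinutes(2),     // instance ownership duration
    TimeSpan.FromSeconds(5)));   // loading interval

runtime.StartRuntime();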

Of course this was not figured out that quickly. I assumed the operational engineers would use the configuration we’d sent them, so I never bothered to check it. If I had, I would have solved this bug in a matter of minutes. Now it took us days of prodding, testing and praying, until another developer mentioned he had noticed the config looked almost the same, but not quite. *sigh*.
Bugtracking rule #1: Always check the config!

Saturday, June 13, 2009

EU wants other browsers in Windows

The European Union is working on an anti-trust case against Microsoft. They claim that the consumer (of the Windows operating system) has no choice in browser and only has Internet Explorer because it is bundled with Windows. The EU wants Microsoft to start bundling other browsers (Safari, Opera, Firefox, Chrome) to give the user more choice.

I find that an absurdly stupid idea. I can understand that they would want to give people a choice, but let’s face it, most computer users are morons when it comes to computers (no offence). They don’t know anything about computers. Hell, they never updated their software until Microsoft set the default to “don’t ask, just update”. And now they should need to choose between different web browsers? That’s a recipe for disaster.

Aside from users not knowing what they want or need, there are also business issues to consider. Let’s say you buy Windows and you choose to install Safari. And now when you visit your favourite web site, Safari crashes every time. Who will you call for support? Well, Microsoft of course. It’s their Windows system you bought, and Safari was included with it. So now all of a sudden Microsoft has to offer support for Safari, or Opera, or Firefox, etc.

Also, suppose a major security issue turns up in Opera, which you installed because it came with Windows. Who is going to fix that? Not Microsoft, because they don’t make that product. But who are you going to demand should fix it? Well, Microsoft of course, because it came with their Windows system, did it not? And can Microsoft guarantee that Opera follows its security guidelines and makes the software as safe as possible?

Suppose there was a security issue and Opera fixed it. How are you going to get the security fix? I highly doubt it’s going to be through Microsoft Update. So the update will probably not be installed automatically by default. And every browser will have its own system for updates.

Microsoft also has strict guidelines on how programs look and work. Safari is made by Apple. In all honesty I’ve never used Safari for Windows, but I assume it looks strikingly similar to Safari for Mac OS. I do use iTunes and it looks nothing like a Windows application, because it’s made for a completely different OS. Apple has its own user interface guidelines and they are a lot different from Microsoft’s. So now you have out-of-the-box software that looks (and behaves) completely differently from all the other software that is installed.

So, yeah, I’m not much for bundling different browsers with Windows. I would love to present people a choice, but Windows should stick with Internet Explorer. Why not, upon the first connection to the internet, give people a page that tells them they have a choice in browsers and links to the websites for those browsers? But bundling software made by completely different companies with Windows is a stupid idea.

And just so you know: I’m a Firefox user and only use Internet Explorer at work, because our intranet doesn’t work correctly with Firefox.

PS
This is not my original idea (I read it somewhere else), but why give people only the choice between Internet Explorer, Firefox, Safari, Opera and Chrome? Why not all of the gazillion different web browsers that are out there? Doesn’t seem fair to the little guy. On the other hand, Opera is pretty much “the little guy”, and since they are furiously trying to get their browser bundled with Windows I really doubt they’re in it for “what’s good for the customer”. As I see it, they just use this to try and increase their market share.

Friday, February 20, 2009

Error loading workflow

Have you ever seen an error message like this when developing a Windows Workflow application?

[Screenshot: ErrorLoadingWorkflow]

I have. Lots of times.

I’m not sure what is causing this. We have a project that contains the workflow and several other projects that contain the activities we use. We also do a little inheritance, where all workflows inherit from a base workflow and all activities inherit from a base activity. The only things they inherit are properties and functions. We do not have a basic workflow with activities which other workflows then add to. Although I would really love to have that!

So far I have discovered two possible workarounds:

  1. Rebuild the projects containing the workflow classes.
  2. Restart Visual Studio.

Workaround #1 is preferable, because if you have a large solution, the time it takes Visual Studio to load it can get really annoying. So what I did was create a small MSBuild script that cleans and then builds all my workflow projects: first the projects containing the activities, then the projects containing the workflows. Most of the time that worked for me, and you don’t even need to close anything. You can even keep the workflow designer open with the error message showing; after the build it will refresh, and if the build succeeded the workflow will be shown.
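Something along these lines (a sketch; the project names are placeholders for your own):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="RebuildWorkflows">
  <ItemGroup>
    <!-- Placeholder project names: activity projects first, workflow projects second. -->
    <ActivityProjects Include="MyActivities\MyActivities.csproj" />
    <WorkflowProjects Include="MyWorkflows\MyWorkflows.csproj" />
  </ItemGroup>
  <Target Name="RebuildWorkflows">
    <!-- Clean everything, then build the activities before the workflows that use them. -->
    <MSBuild Projects="@(ActivityProjects);@(WorkflowProjects)" Targets="Clean" />
    <MSBuild Projects="@(ActivityProjects)" Targets="Build" />
    <MSBuild Projects="@(WorkflowProjects)" Targets="Build" />
  </Target>
</Project>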

But sometimes that does not work. Then I would have to quit Visual Studio, run my build script and open Visual Studio again.

It’s also very possible that you have a genuine problem or bug. It is pretty easy to create a situation where you end up being unable to open the workflow in the workflow designer and you’ll have to edit the code-behind file by hand.

Now I’m sure that our inheritance aggravates some of the problems, as inheritance isn’t officially supported in Windows Workflow (although I heard they might have something for that in WF 4.0, but I haven’t checked that out yet), but I suspect you’ll run into these problems every now and then even if you do not use inheritance.

Saturday, January 31, 2009

Seeing a workflow in the database doesn’t mean it’s persisted

At work we are building a big application that uses Windows Workflow. We are delivering versions to Test at certain milestones so we don’t have to build everything first before we can see how good we built it.

We ran into an interesting issue I want to share. But first let me tell you about some technical details. As I said, we use Windows Workflow for the workflows in our application. The application receives an order (through a webservice), does some checking and authenticating, then finds the correct workflow to handle the order and starts that workflow. The workflows we make inherit from a base class. This base class contains 2 or 3 dependency properties that every workflow should have (such as the order data) and also contains some methods that every workflow can use. We also have a base class for some activities, but these mainly contain methods that most other activities will need. A workflow also needs to implement a certain interface, so we can load all the workflows with our Dependency Injection container and determine which workflow we need for which order.

So far so good; aside from some other problems I want to blog about soon, it all seemed to work perfectly. Our first milestone was a couple of simple workflows that started, did some stuff and ended. Nothing special, and they worked perfectly. If you used Workflow Monitor to look in the database you’d see the workflows. So to us that meant the workflows were successfully persisted.

Then for the second milestone we created a bunch of more complex workflows. These workflows did a call to a webservice and had to wait on the call-back from that webservice. In workflow this is handled very easily using the ExternalDataExchangeService and the HandleExternalEventActivity. We coded it up and started testing.

When the call-back came in, some stuff happened, like validation, and eventually the call-back data came to the part where we’d ask the workflow runtime to give us a pointer to the workflow instance that’s waiting on the data, and then use the ExternalDataExchangeService to deliver the data to the workflow so it can go on with its process. But instead of just working, the workflow runtime told us that the requested workflow couldn’t be found in the persistence store. We checked with the Workflow Monitor and we saw the workflow, its status set to running (or idle). But the runtime insisted it was not there.

Huh??

I did some digging and finally noticed an error in our log files: some error about an interface not being serializable, which caused the persistence to fail. It turns out that what we see in the Workflow Monitor is Workflow Tracking, not Workflow Persistence. In fact, unless you explicitly tell it to do so, workflows will not be persisted. You can either explicitly tell the Workflow Runtime to persist your workflow, or use an activity that causes it to be persisted, which is what (as far as I know) most activities that pause your workflow do. So in our first milestone nothing caused the workflows to be persisted, but in our second milestone we made the workflow pause at the HandleExternalEventActivity, which meant the workflow had to be persisted, which then failed.
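For reference, explicitly persisting boils down to something like this (a sketch; workflowRuntime and orderWorkflowId stand in for your own runtime reference and instance id):

// Unload() persists the workflow state to the store, then removes it from memory.
WorkflowInstance instance = workflowRuntime.GetWorkflow(orderWorkflowId);
instance.Unload();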

So what was the problem? Well, all fields and get+set properties on a workflow or activity have to be serializable. That means the [Serializable] attribute has to be tacked onto the class, and onto the class of everything stored inside it. Since we set up references to some logic classes (Logging Service, Order Service, etc.) in the base class, those would also need to be serializable.

We solved the problem by refactoring all the service fields into read-only properties (‘get’ properties) that don’t store the reference in the workflow or activity, but instead ask our Dependency Injection container for the service through a static method each time we need one.
We also removed some other classes that were stored as private fields and now just instantiate them within a method as they are needed, and a few classes were made serializable.
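In code the refactoring looks roughly like this (a sketch; ServiceLocator and ILoggingService stand in for our actual container and service interfaces):

[Serializable]
public abstract class BaseOrderWorkflow : SequentialWorkflowActivity
{
    // No field holds the service, so nothing non-serializable is persisted with
    // the workflow; the container is asked again on every access.
    protected ILoggingService LoggingService
    {
        get { return ServiceLocator.Resolve<ILoggingService>(); }
    }
}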

The biggest downside of this is that Windows Workflow demands a very strict versioning scheme, so this means that we now have a few more assemblies that require careful versioning.

Friday, January 16, 2009

Application configuration in the database

Recently K. Scott Allen posted an article called App Configuration and Databases.

His conclusion is:

The database should be one of your last choices as a repository for configuration information. Configuration in code is easily tested, versioned, changed, and refactored. As Jeff said – only a specific requirement should drive you to a more costly alternative like the database.

I humbly disagree. We used to put our configuration into the web.config of our projects. This was easy, but had two downsides:

  1. Whenever we release a new version it’s best our operations guys overwrite the old config, because we might have changed some values in there that they should not mess with, things like configuration for our DI containers, or WCF settings. They do need to copy the settings from the old config into the new one, and that takes time and there is a chance of error, especially if you have large config files.
  2. It’s impossible to do “hot-configuration”. If you change the web.config, IIS will restart the web application. This is not a big issue, but did cause some problems once or twice.

As for configuration in code I see the following problems:

  1. We use OTAP (separate environments for Development, Test, User Acceptance and Production). This means that we’d either have to create different “configuration DLLs” for our different environments, or we’d need code that detects what kind of environment the application runs in and uses that to determine the correct configuration.
  2. If you need a configuration change, you need a developer to check out the code, change it, check it back in and deliver the code to Test. Test would need to run all kinds of tests and, when done, deliver the application to operations for installation. Changing configuration in a separate file or database is something the operations guy can do himself.
  3. We, as developers, don’t have access to production machines. Security regulations dictate we should not know passwords to production environments, so we couldn’t even create the configuration DLLs.

Of course these concerns can all be addressed, but we decided it was better to use a database.

  • If the configuration is in a database, the application (and its associated web.config file) can be overwritten without problems. (Of course some values would need to be copied, but only a few.)
  • We can provide the application with default configuration values and the tester can change them for the test environment. The guy from operations can change them into the production values.
  • If you’re doing load balancing on the application it’s possible to reuse the same configuration database (might not be practical for some systems, but it is a possibility).
  • It’s easy to add a configuration page to an administration web application that we’ll probably need to build anyway.
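Reading such configuration can stay very simple too. A minimal sketch, assuming a plain key/value Settings table (the table, column, and class names are illustrative):

using System.Data.SqlClient;

public static class DatabaseConfiguration
{
    // Connection string to the configuration database; a placeholder here.
    private const string ConnectionString =
        "Initial Catalog=AppConfiguration;Data Source=.;Integrated Security=SSPI;";

    public static string GetSetting(string key)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT [Value] FROM [Settings] WHERE [Key] = @key", connection))
        {
            command.Parameters.AddWithValue("@key", key);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}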

So all-in-all it seems like the best solution for us (for the moment), but that’s largely because of the way we work. Your company may use other methods which might not make configuration in the database the best choice, but I don’t think it should be the last choice.

Tuesday, January 6, 2009

SOAP logging for webservices

If you wish to log incoming SOAP requests to your ASMX webservice, you can write a SOAP extension that does just that.

You’ll need to create a class that inherits from SoapExtension and you should override the methods GetInitializer, Initialize, ChainStream and ProcessMessage.
It’s not that hard; you can find a working example on the MSDN site: How to: Implement a SOAP Extension. It works for both the server (which accepts incoming SOAP requests) and a client (which sends SOAP requests to servers).

All you need to do is add this to your web.config (or app.config) and it will work:

<system.web>
    <webServices>
        <soapExtensionTypes>
            <!-- The type attribute is "Namespace.ClassName, AssemblyName". -->
            <add type="MyNamespace.SoapLogger.MySoapLogger, MyNamespace.SoapLogger" priority="1" group="High" />
        </soapExtensionTypes>
    </webServices>
</system.web>

If you wish to create a SOAP logger for a WCF webservice, that’s both easier and harder. It’s simpler to write the extension itself, but now you must write different things for a server and a client, and making it configurable requires some more work than the ASMX version.

In WCF you don’t write a SOAP Extension, but rather a message inspector. And you won’t need to inherit from a base class; instead you implement certain interfaces to make it work.
To be able to see incoming (server) messages you’ll need to implement IDispatchMessageInspector, and for outgoing (client) messages you’ll need to implement IClientMessageInspector. It’s worth mentioning that nothing prevents you from implementing both interfaces on the same class. That’s what I did.

Both interfaces require you to implement two functions:

object IDispatchMessageInspector.AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext);
void IDispatchMessageInspector.BeforeSendReply(ref Message reply, object correlationState);

And:

object IClientMessageInspector.BeforeSendRequest(ref Message request, IClientChannel channel);
void IClientMessageInspector.AfterReceiveReply(ref Message reply, object correlationState);

For IDispatchMessageInspector there is AfterReceiveRequest, called when the request is received, before your normal code runs, and BeforeSendReply, called directly before the response is sent to the requester, after all your normal code has run.
For IClientMessageInspector there is BeforeSendRequest, called right before the request is sent to the server, after all your normal code has run, and AfterReceiveReply, called when the response from the server has been received, before your normal code runs.
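Put together, a class implementing both interfaces might look like this (a minimal sketch; SoapLoggingInspector and LogMessage are illustrative names, and the body of LogMessage is the logging code shown further below):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class SoapLoggingInspector : IDispatchMessageInspector, IClientMessageInspector
{
    // Server side: log the incoming request and the outgoing reply.
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        this.LogMessage(request);
        return null; // correlation state, not needed here
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        this.LogMessage(reply);
    }

    // Client side: log the outgoing request and the incoming reply.
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        this.LogMessage(request);
        return null; // correlation state, not needed here
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        this.LogMessage(reply);
    }

    private void LogMessage(Message message)
    {
        // The logging code discussed below goes here.
    }
}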

Basically you can do anything you want to the message at those points, even alter the data. One of the examples I found was validating incoming requests against an XSD. Another was doing some translations for backwards compatibility’s sake. But in this case we just want to log the data.

So what we do in each of these functions is the following:

try
{
    // Only try to parse the message if it is not empty.
    if(message.IsEmpty == false)
    {
        string xml = message.ToString();
        XDocument xmlDocument = XDocument.Parse(xml);

        StringBuilder sb = new StringBuilder();
        XmlWriter writer = XmlWriter.Create(sb, new XmlWriterSettings { Encoding = Encoding.ASCII, Indent = true });
        xmlDocument.WriteTo(writer);
        writer.Close();

        sb.Append("\r\n");
        this.soapLogger.Log(sb.ToString());
    }
}
catch(Exception e)
{
    if(System.Diagnostics.Debugger.IsAttached == true)
    {
        System.Diagnostics.Debugger.Break();
    }
}

In this case the "message" variable contains the SOAP message. I wrote one function to do the logging and call it from the four functions mentioned above. Also, the "soapLogger" class variable is the logger I use. It’s not particularly important which logger this is, but if you want to know, I use Log4NET.

The use of the XDocument and the XmlWriter is absolutely not necessary, but I use them to make the XML look pretty (indented) in the log. It is a lot slower than just directly logging the message to the logfile, but that would result in the whole XML message appearing on one line.
The stuff in the catch clause is a neat trick I use to catch unexpected exceptions in the debugger.

If you want the simplest solution, you can also do it like this:

// Only try to parse the message if it is not empty.
if(message.IsEmpty == false)
{
    string xml = message.ToString();
    this.soapLogger.Log(xml);
}
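
Finally, to actually hook the inspector into WCF you also need a behavior; this is the extra work compared to the ASMX version I mentioned earlier. A minimal sketch using an endpoint behavior (SoapLoggingBehavior is an illustrative name), which you can add to an endpoint’s Behaviors collection in code, or expose through a BehaviorExtensionElement if you want to turn it on from the config file:

using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class SoapLoggingBehavior : IEndpointBehavior
{
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void Validate(ServiceEndpoint endpoint) { }

    // Client side: attach the inspector for outgoing requests and incoming replies.
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new SoapLoggingInspector());
    }

    // Server side: attach the inspector for incoming requests and outgoing replies.
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new SoapLoggingInspector());
    }
}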