Saturday, May 31, 2008

Clouds for the masses

I've been hearing about "The Cloud" for some time now. Mostly from Microsoft engineers. There is 'Cloud Computing' and 'Cloud Storage'. Sending things to 'The Cloud'.
The cloud in this case is of course the internet. They use the term "cloud" because when you draw a network diagram you usually symbolize the internet with a cloud.

So the cloud talk has been around in developer circles for a while. And now Valve (makers of Half-Life and Portal) has announced Steam Cloud. The Steam Cloud framework enables games to store settings and saved games on Valve's servers on the internet (in the cloud). Aside from this being totally awesome, it's just funny they chose the 'cloud' name. Of course it fits perfectly with their 'Steam' brand name, and it makes sense to most people because steam produces clouds. But to a developer it also makes sense, because it does something in "The Cloud".
I thought it was funny to see a developer-centric term go mainstream largely without losing its meaning (even though non-developers probably won't give it quite the same meaning developers do).

Monday, May 26, 2008

An internal API exposed as SOAP is not a webservice

As the system we build at work grows, more and more other systems need to be connected to it. That's the case with a system that handles domain registration and hosting products for our smaller customers.

That system was originally self-contained: a web application with all the workflow, provisioning and billing built in. Now some architects have decided that it should be connected to our system, so the big commercial systems can all offer these products without having to resort to manual steps performed by human operators.

There was resistance. The product managers and designers didn't like the idea. I assume the builders didn't like it either, because the development is out-sourced and it would mean less income for that company. But after some political 'nudging' it was accepted that our system should be able to talk to theirs, and they were going to give us webservices that would let our system offer the same products and functionality as their web interface does.

When I got the documentation on the webservice I was unpleasantly surprised. There are a lot of methods, and they're all small methods, each with a very small, singular purpose. As far as I can tell, they took the public properties of their Business Logic Layer and their Data Access Layer, turned them all into SOAP methods and called it a nice webservice interface. It's going to be a challenge to get that working smoothly...

Saturday, May 17, 2008

Your reference is not my reference

This week we had a discussion at work. It was about reference IDs. When a system calls the webservice on another system, sometimes you need to specify a reference ID. One such case is with asynchronous webservices. The system makes the call, but does not get an immediate answer (other than: "Yeah, we're working on it"). At some point in the future the system gets a call back from the webservice (a callback) delivering the result of the original request. One way (and in my opinion the easiest way) to deal with this is to have the system send a reference ID with the original request and have the webservice put that reference in the asynchronous result. This way the system can correlate the result with the request and process it.
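
Just to make the idea concrete, here is a minimal JavaScript sketch of that correlation. The names (sendRequest, onCallback, the "REQ-" prefix) are made up for illustration, and a timer stands in for the real back-end system:

// Outstanding requests, keyed by the reference ID we generated ourselves.
var pendingRequests = {};
var nextReference = 1;

function sendRequest(payload, handleResult) {
    var referenceId = "REQ-" + nextReference++;
    pendingRequests[referenceId] = handleResult;

    // Fire the asynchronous call. The only immediate answer is "we're working on it";
    // here a timer simulates the back-end calling us back later with our reference ID.
    setTimeout(function () { onCallback(referenceId, "result for " + payload); }, 1000);
}

// The callback echoes our reference ID, so we can find the original request again.
function onCallback(referenceId, result) {
    var handleResult = pendingRequests[referenceId];
    if (handleResult) {
        delete pendingRequests[referenceId];
        handleResult(result);
    }
}

sendRequest("register example.nl", function (result) { alert(result); });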

But as we found out today, that's thinking in "functions": calling a webservice function and getting its result is an operation that's completely separate from anything else. Anything else can happen between making the call and receiving the result, and once you have processed the result, that part of the work is done. But for some calls we need to think in "processes".

Our system is like a webservice bridge. We expose a generic interface to commercial systems and we consume very specific interfaces from technical back-end systems. So when a commercial system sends us a request, we map it onto a specific request to a back-end system. That way the commercial system doesn't need to know all the gritty details of the technical system. Also, all of our calls are atomic... except for two. Normally the commercial system deals with the process and that has never had any effect on our system, until we needed to support a certain function where it did. In my opinion the biggest problem was caused by a faulty design of the technical back-end, but it did expose a pitfall we could fall into ourselves now that we are designing a new interface for the commercial systems. Sometimes you not only need to know which request was responsible for the incoming result, but also which commercial process it's part of. This is required if certain steps need to be performed in order and the technical back-end system also needs to follow up on an earlier request with a follow-up request.

So we had a discussion about this. Do we take the reference from the commercial system and pass it on to the technical system, out-sourcing the generation of references as it were? I was opposed, because I think a reference should only have meaning to the system that generated it. If we use the reference from the commercial system, we give it a meaning it shouldn't have. Also, if at some point in the future we add more commercial systems, there is a chance (and knowing Murphy, it will happen) that more than one commercial system will come up with the same reference, and then we have a problem.

My take on the whole situation is that we apparently need both a "function-reference" and a "process-reference". We generate our own function reference and use the process reference of the system that guards the process. This could be our system, in which case we generate the process reference, or it could be a commercial system, in which case we just take the reference and pass it on.
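
As a small illustration (the field names here are invented, this is not our actual message format), a request passing through our system would then carry something like:

// Our own function reference, plus the process reference of whichever
// system guards the process (ours, or a commercial system's).
var request = {
    functionReference: "BRIDGE-REQ-42",    // generated by us, meaningful only to us
    processReference: "COMMERCIAL-PROC-7", // passed on from the system that owns the process
    payload: { domainName: "example.nl" }  // the actual request data (made-up example)
};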

Tuesday, May 6, 2008

Some Ajax/WCF things I learned

I'm reworking a small ad-hoc web application I wrote because, like all ad-hoc tools, it got a rather permanent status. It's like they say: "Nothing is as permanent as a temporary solution".
I wrote the first version using ASP.NET MVC. The only reason I did so was that I wanted to experiment with MVC and they told me to create the application in any way I saw fit, so it was a great way to experiment a little. Now that the web application will be used on a more permanent basis and we will have to actively maintain it, I decided to rewrite it. An additional requirement also made the whole MVC approach a little harder than it ought to be. So I rewrote the web application, but this time I decided to use a WCF JSON webservice and go all-out Ajax. Still experimental, but less CTP-al.

So my first order of business was to write a WCF webservice that did the things I needed it to do. This was easy. No hurdles there.
Did you know you can put a [DataMember] attribute on a private member? It will even show up in the WCF service while still being private in your code. I used this because I had some data that needed to be exposed in the webservice as a string, but as a different data type internally. So I made a private property that made a string out of that data and put the [DataMember] attribute on it. And then I had a public property without the attribute for use inside the code. Very handy.

Next up, after getting the WCF webservice done, was consuming it through JavaScript. Nothing really difficult here either: just Google for JavaScript and WCF and you'll find lots of examples. That's what I did. The only thing I did run into was that at first I tried to have two separate web projects, one for the WCF webservice and one for the web application. For some reason I never got the JavaScript to work with the WCF webservice that way. I don't think it's impossible, but after fiddling with the WCF configuration for an hour I gave up, merged everything into one project, and it worked.

After I got the WCF/JavaScript communication to work I started to work on the JavaScript logic. And in doing so I learned some stuff about JavaScript.

JavaScript isn't really object oriented in the class-based sense, but you can fake it.
You can create a new 'object' using the new keyword and a function declaration:

function MyObject() { }
var myObject = new MyObject();

If you leave out the 'new' keyword and the parentheses and just write var myObject = MyObject; then the variable "myObject" contains a reference to a function named "MyObject", a function that doesn't do anything. But add the 'new' keyword and now it's something like an object.
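
From the examples I've seen so far, adding properties and methods to such an object seems to go roughly like this (treat it as a sketch, the names are made up):

// Constructor function: inside it, 'this' refers to the new object being created.
function Person(name) {
    this.name = name; // a property on each instance
}

// Methods usually go on the prototype, so all instances share a single copy.
Person.prototype.sayHello = function () {
    alert("Hello, " + this.name);
};

var person = new Person("world");
person.sayHello(); // alerts "Hello, world"
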
I'm not 100% sure that's the proper way to do it though, and beyond that I have no idea yet how to proceed any further.

Also, I had the site working in Firefox and the Internet Explorer 8 beta, but when I tried it with Internet Explorer 7 it didn't work. It turns out that doing something like this:

var element = document.getElementById("someId");
element.setAttribute("class", "cssClass");
element.setAttribute("onclick", "alert('Yay!');");

Works perfectly in Firefox and IE8, but not in IE7. The annoying part is that if you use the Developer Toolbar in IE7 and inspect the DOM properties of the element, they look fine. I created a similar element in straight HTML and compared it to the element I added and set up dynamically through JavaScript, but I didn't see any difference. Yet it didn't work.
It turns out you can't do this in IE7; you need to set the corresponding properties on the element directly. So the above needs to be:

var element = document.getElementById("someId");
element.className = "cssClass";
element.onclick = function() { alert('Yay!'); };

Luckily this also works in Firefox and IE8.

More as I learn more...