Sunday, 31 August 2008

ASP.NET and jQuery = a powerful combination

Over the last few years a number of interesting JavaScript frameworks have emerged. My reaction to JavaScript used to be quite allergic; for me there was no such thing as maintainable JavaScript code. But apparently things have changed drastically. At the moment we can choose from many different frameworks:

All of them are pretty mature, well documented and definitely worthwhile.

Recently I was forced to play with jQuery a bit ... believe me, we tried everything to avoid it, but in the end we gave up. Why? Partly because we couldn't replace jQuery with ASP.NET Ajax in an efficient way, and partly because after a profound investigation I changed my mind about JavaScript. In fact I had one of those 'Ah-ha!' moments ... I was really amazed how many cool things you can do with just a few lines of code.

I don't pretend to be a jQuery expert, but let me show you a simple example, a simple use case which I used as an opportunity to learn more about jQuery.

Okay, let's start ... we want to create a personal contact list. The application will list all people with a telephone number and some additional details. An important part of this use case is the user experience; our aim is to use Ajax calls as much as we can in order to avoid reloading the whole page. Let's say that this is how the application will look:

I have used one of WUFOO's CSS themes, and this is the only reason why this application doesn't look so bad ... UI and CSS design is not my strong side ;)

Anyway, the interesting part now ... after clicking on 'Details', the table should expand and we should get some additional information about the selected person. The expandable table should be implemented using Ajax calls. The final view should look like this:


  • Normally, to create an application like this, you need some way to persist data; it can be anything ... a database, an XML file, a text file or anything else. In this case our data will be hardcoded (check the source code), as this post is focused on the user interface.
  • A number of controls can be used to generate a table like this ... personally I would use repeaters to do the job, but other options are also good and again ... that's not the point.
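
For illustration, a repeater-based version could look roughly like this. This is a sketch only; the data source and field names are my assumptions and are not taken from the original source code, while the aDetails / detailsPanel class names and the contactID attribute match what the client-side script relies on:

```aspx
<asp:Repeater ID="ContactsRepeater" runat="server">
    <HeaderTemplate>
        <table>
    </HeaderTemplate>
    <ItemTemplate>
        <tr>
            <td><%# Eval("Name") %></td>
            <td><%# Eval("Telephone") %></td>
            <td><a class="aDetails" href="#" contactID='<%# Eval("ContactID") %>'>Details</a></td>
        </tr>
        <!-- hidden row that the ajax call will fill with details -->
        <tr class="detailsPanel" style="display: none;">
            <td colspan="3"></td>
        </tr>
    </ItemTemplate>
    <FooterTemplate>
        </table>
    </FooterTemplate>
</asp:Repeater>
```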

JQuery magic

JavaScript in this example is responsible for:
  • making sure that when the 'Details' link is clicked our function will be invoked,
  • getting all the required details for the selected person with an Ajax call and showing them to the user.

All of that can be done with this piece of JavaScript:

$(document).ready(function () {
    // find our 'Details' links and bind click event with our function
    $(".aDetails").click(function (e) {
        // stop the browser from following the link
        e.preventDefault();

        // we will use this link to get contactID
        var link = $(this);

        // find parent row
        var row = link.parent('td').parent('tr');

        // remove focus from the link
        link.blur();

        // remove selection from other rows
        row.siblings('.selected').removeClass('selected');

        // select this row
        row.addClass('selected');

        // get details - ajax call
        $.ajax({
            url: applicationPath + "/utils/ContactDetails.aspx?contactID=" + link.attr("contactID"),
            success: function (html) {
                // put the returned markup into the details panel row and show it
                row.next('tr.detailsPanel').show().find('td').html(html);
            }
        });
    });
});

The in-code comments should give you an idea of what is going on ... if something is still not clear, then
  • check how the HTML is structured; it should help you understand what exactly functions like next('tr.detailsPanel') are doing,
  • and of course check the jQuery documentation.
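
To make the traversal concrete, here is a sketch of the kind of rendered markup the script expects (the column values are invented for illustration): each contact row is immediately followed by a hidden tr.detailsPanel row, which is exactly what next('tr.detailsPanel') finds:

```html
<table>
  <tr>
    <td>John Smith</td>
    <td>555-0100</td>
    <td><a class="aDetails" href="#" contactID="1">Details</a></td>
  </tr>
  <!-- hidden row filled in by the ajax call -->
  <tr class="detailsPanel" style="display: none;">
    <td colspan="3"></td>
  </tr>
</table>
```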

Why ASP.NET Ajax is not a good alternative?

It's not a big challenge to get the same results in terms of user experience with ASP.NET Ajax; the simplest way is to put the control responsible for rendering the table into an UpdatePanel like this:

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <asp:Repeater ID="Repeater1" runat="server">
            ...
        </asp:Repeater>
    </ContentTemplate>
</asp:UpdatePanel>

It works, but in fact each Ajax call refreshes the whole table ... and in this case it means that almost the whole page is refreshed instead of one row. As long as the number of records is small it's fine ... but if you have a huge number of records it might turn out to be a problem. I haven't found any sensible way to update just one row ... maybe you can suggest something?

I hope that this little application shows that JavaScript is back in the first league. Developers should be familiar with at least one JavaScript framework ... pick the one you like. Frameworks like jQuery provide many flexible and powerful features which can help enhance your sites.

My plan is to continue enhancing this application and to keep learning and experimenting ... stay tuned, as the next parts of my adventures with jQuery will show up. The source code for this part can be downloaded from here.

Saturday, 23 August 2008

Scrum - why extending the sprint (iteration) length is usually not a good idea

I have to admit -- I'm a big fan of short iterations and I have plenty of reasons for it! But what does short mean? In most cases a 2-week iteration is a good start. You should consider shorter iterations only if the rule of having at least four or five iterations within a project would otherwise be broken; hence any project shorter than 8-10 weeks is a potential candidate. On the other hand, big projects (6 months and more) should consider 3-week iterations ... but only ... I repeat, ONLY ... when you are sure that Scrum, and the agile software development process in general, works for you and you are happy with the results.

How can you know that Scrum works for your team?

One thing has to be clear -- you don't have to use pure Scrum to make it work for you. Sometimes applying only some aspects of agile software development is the best approach, and it doesn't make it any less agile. The fundamental thing about being agile is improving the process, so get rid of things that don't work, experiment with new ideas and learn!
It's often the case that people feel that there is something wrong but can't really say what exactly, so I highly recommend watching this presentation: "10 Ways to Screw Up with Scrum and XP". You can also find more materials on Henrik Kniberg's blog. Check if you find your problems on this list, and check if you follow the fundamental Scrum rules.

Why short iterations are better?

I stated before that I have plenty of arguments ... so here is a list:

  1. The most important one - short iterations allow you to find problems early. Always look for problems, and remember the rule: visible problem = killable problem. If your team is aware of a problem, then there is a good chance that they can cope with it. Treat problems as an opportunity for improvement.
  2. With short iterations, new ideas and improvements can be applied more quickly. After each iteration you should have a retrospective meeting to identify problems and figure out how to improve the process for the next iteration. Short iterations mean that you can improve more frequently.
  3. About certain problems you can learn only after a full iteration ... for instance, a common problem is that people don't work as a team ... each person has their own task, and at the beginning of an iteration all stories are in progress. Sounds good, as it means that the whole team is working and moving forward, but actually it's not good, because you can't really say how many user stories will still be in progress at the end of the iteration. In the worst case, all stories are 90% done by the end of the iteration, which means that from the client's perspective this iteration didn't add any new functionality, any releasable code, any new value.
  4. Estimations and velocity - at the beginning of a project, how can you provide reliable estimations? How do you know how long it will take to deliver the whole system? At the beginning you don't know much about the team's performance, and you don't know how cooperation between the team and the client will look. So you probably can't provide any reliable estimations ... you can only guess. But the first few iterations should give you a good idea of the team's velocity, which should result in much more reliable estimations ... now you can figure out what can be done in the next two, five or ten months. Sounds like a useful thing?
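
The velocity arithmetic from point 4 can be sketched in a few lines (a hypothetical example; the numbers are made up purely for illustration):

```javascript
// Estimate how many iterations are left, given the story points
// delivered in the iterations completed so far.
function estimateRemainingIterations(completedPoints, remainingBacklogPoints) {
    // average velocity = points delivered per iteration so far
    var total = 0;
    for (var i = 0; i < completedPoints.length; i++) {
        total += completedPoints[i];
    }
    var velocity = total / completedPoints.length;

    // round up - a partially used iteration is still a whole iteration
    return Math.ceil(remainingBacklogPoints / velocity);
}

// e.g. three 2-week iterations delivering 18, 22 and 20 points,
// with 120 points left in the backlog:
var iterationsLeft = estimateRemainingIterations([18, 22, 20], 120);
// velocity = 20 points/iteration, so 6 iterations (~12 weeks) remain
```

The key point is that the estimate only becomes meaningful after a few completed iterations, which is exactly why short iterations give you reliable numbers sooner.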

Are you convinced? I hope you are ... if not ... then challenge me and post your arguments! Maybe you can persuade me that I'm wrong :)


Sunday, 17 August 2008

EPiServer, MultipageProperty -- don't use SelectedPages property!

I don't know how it works for you, but I can't imagine life without MultipageProperty. I use it in most of our projects, and that is great because I love the flexibility it offers. Recently, while checking the MultipageProperty source code, I found something worrying -- if you check how the PropertyMultiPage.SelectedPages property is implemented, you will find that for each internal page the following method is invoked:

private PageData GetInternalPage(PageData pd, MultipageLinkItem multipageLinkItem)
{
    // make sure pagedata object is writeable
    pd = pd.CreateWritableClone();

    // We must have detected a change, and have a link text
    if (multipageLinkItem.EditorHasChangedLinkText == true &&
        string.IsNullOrEmpty(multipageLinkItem.LinkText) == false &&
        multipageLinkItem.LinkText.Trim().Length > 0)
    {
        AddPropertyHelper(pd, "PageName", new PropertyString(multipageLinkItem.LinkText));

        // We should also be able to test if the link text has been changed
        // from the outside, so we add a flag that indicates that the PageName
        // has changed
        AddPropertyHelper(pd, "LinkTextHasChanged", new PropertyBoolean(true));
    }

    PropertyFrame frameProp = new PropertyFrame();
    frameProp.FrameName = multipageLinkItem.Target;
    AddPropertyHelper(pd, "PageTargetFrame", frameProp);

    AddPropertyHelper(pd, "PageLinkToolTip", new PropertyString(multipageLinkItem.Tooltip));

    // It needs to have published status, or it may be filtered away
    AddPropertyHelper(pd, "PageWorkStatus", new PropertyNumber((int)VersionStatus.Published));

    // It should look like something we just fetched
    pd.IsModified = false;

    return pd;
}

What is the problem with that method? The problem is here:

pd = pd.CreateWritableClone();

Do we really need to create a writable version of PageData for each page? Well, as you can see in the above code, the page is "extended" with additional properties, which you may know from MultipageProperty's dialog. Probably, without creating a writable version of the page, this "extending" is not possible. It's of course quite a cool feature, but you have to be aware that there are performance consequences of calling the CreateWritableClone() method.

So what should I do?

I'm far from saying that we should rewrite this piece of functionality or stop using MultipageProperty at all ... that would be crazy!
But I would like to suggest using the SelectedLinkItems property instead of SelectedPages. The PropertyMultiPage.SelectedLinkItems property returns a MultipageLinkItemCollection object; a single MultipageLinkItem object contains all the necessary page-related data, including URL, link target, link text etc.

Take a look at the PropertyMultiPage.SelectedPages property implementation to see how you can use the MultipageLinkItem object to get everything you need:

public PageDataCollection SelectedPages
{
    get
    {
        _selectedPages = new PageDataCollection();

        // All link items
        MultipageLinkItemCollection linkItems = this.SelectedLinkItems;

        foreach (MultipageLinkItem multipageLinkItem in linkItems)
        {
            PageReference pageref = PageReference.ParseUrl(multipageLinkItem.Url);
            PageData page = null;

            // ParseUrl also work for fully qualified urls (http://...) which
            // we will never have for our own pages. To qualify as an internal
            // EPiServer page, the parse must be successful, and the url must
            // start with "/". If we cannot load these pages, they will be
            // removed from the collection altogether.
            if (!PageReference.IsNullOrEmpty(pageref) && multipageLinkItem.Url.StartsWith("/"))
            {
                // get the page with error handling for
                // access denied or deleted page
                try
                {
                    // Could be language sensitive
                    if (EPiServer.Configuration.Settings.Instance.UIShowGlobalizationUserInterface)
                    {
                        // First we check if we have a specific language to load
                        if (string.IsNullOrEmpty(multipageLinkItem.LanguageId) == false)
                        {
                            // Load page, with specific language
                            page = EPiServer.DataFactory.Instance.GetPage(
                                pageref, new LanguageSelector(multipageLinkItem.LanguageId));
                        }
                        else
                        {
                            // Load page, with master language fallback
                            page = EPiServer.DataFactory.Instance.GetPage(
                                pageref, LanguageSelector.AutoDetect(true /* enableMasterLanguageFallback */));
                        }
                    }
                    else
                    {
                        page = DataFactory.Instance.GetPage(pageref);
                    }
                }
                catch (PageNotFoundException notFoundEx)
                {
                    // the page has been deleted - skip it
                }

                if (page != null)
                {
                    // "extend" the page - this is where CreateWritableClone() is called
                    _selectedPages.Add(GetInternalPage(page, multipageLinkItem));
                }
            }
        }

        return _selectedPages;
    }
    set { _selectedPages = value; }
}

I hope that this gives you a good idea of how to use the MultipageLinkItem class and how to improve the performance of your applications. Maybe it will also inspire someone to spend some spare time tweaking the PropertyMultiPage.SelectedPages implementation to not use the CreateWritableClone() method.

Sunday, 10 August 2008

Mary Poppendieck -- The role of leadership in software development

Recently I keep finding lots of interesting stuff about team management. This time Mary Poppendieck's talk "The role of leadership in software development" came to my attention. I found it on Google's Tech Talks channel. But what is it all about?

When you look around, there are a lot of leaders recommended for software development. We have the functional manager and the project manager, the scrum master and the black belt, the product owner and the customer-on-site, the technical leader and the architect, the product manager and the chief engineer.

Clearly that's too many leaders. So how many leaders should there be, what should they do, what shouldn't they do, and what skills do they need?

This will be a presentation and discussion of leadership roles in software development -- what works, what doesn't and why.

For me that introduction was interesting enough to spend 1 hour and 30 minutes watching the talk, and I have to admit that it was absolutely worth it, so I recommend you do the same!

My main take-aways are:

  • a general overview of how the concept of leadership evolved, starting from 1850 and moving to the present day. I was surprised how closely it was connected with the army and how many important breakthroughs were triggered by wars.
  • what really makes organisations work is not one standardized process and people doing exactly what is written down. In that model you can forget about people being interested in process improvement; it's impossible to make full use of people's potential. Basically, it doesn't work!
  • a leader is a person with a vision; the leader's job is to communicate the vision and help the team members understand it. A leader should act like a teacher: it's not his job to tell people what to do, his job is to tell people how things should work.
And finally, Mary talked about three different kinds of leadership; here is a high-level overview:
  • Product leader -- a person who merges marketing knowledge (understands customers' needs) with fairly high-level technical expertise. The product leader should work closely with developers and is responsible for release planning and making the necessary tradeoffs.
  • Functional leader -- this person should preserve knowledge and hold the technical expertise leadership. A person like that should be responsible for solving the most difficult problems in all projects. Another important part of this role is to train people, helping them get better and grow to their full potential.
  • Project leader -- funding, scheduling and tracking -- those are the main objectives for this role.
Those are the things which were particularly interesting for me. Watch the talk, and if you care to share, let me know what aspects were interesting for you.

Wednesday, 6 August 2008

ASP.NET Web Application debugging and timeouts

While developing web applications it's absolutely normal that at some point it's necessary to debug the code to check a variable's value, the execution flow for some weird input data, and so on. Before running an application in debug mode, Visual Studio will prompt you with the following dialog:

For developers the answer is simple ... of course we want to enable debugging! So you click the OK button and everything works fine. But do you know where this is saved and what the implications are? If you are not sure, then you should probably read this post carefully ;)

<compilation debug="false"/>

This part of web.config determines whether debugging is enabled or not, and this is also the part that will be changed if you click the OK button. What are the implications besides the fact that you can debug the code? Well, the consequences are literally HUGE; here comes a short list:

  1. The compilation of ASP.NET pages takes longer (since some batch optimizations are disabled)
  2. Code can execute slower (since some additional debug paths are enabled)
  3. Much more memory is used within the application at runtime
  4. Scripts and images downloaded from the WebResources.axd handler are not cached

Moreover, timeouts are turned off, which is necessary to debug the code but can be potentially very dangerous on a production server.

I don't want to write yet another post about the debug="true" attribute, production servers and the performance impact, as there are already lots of really good posts about it; check at least those two:

But one additional thing is worth pointing out ... even if you forget about the debug="true" issue, there is something you can do on production servers to make sure that the web.config setting will be ignored:

in the machine.config file set:

<deployment retail="true"/>

This will disable the debug="true" switch, disable the ability to output trace output on a page, and turn off the ability to show detailed error messages remotely. Note that the last two items are security best practices you really want to follow (otherwise hackers can learn a lot more about the internals of your application than you should show them).
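
For context, the deployment element lives under system.web; a minimal machine.config fragment would look roughly like this (a sketch, not a complete machine.config):

```xml
<configuration>
  <system.web>
    <!-- forces debug="false", disables page trace output
         and remote detailed error messages, server-wide -->
    <deployment retail="true" />
  </system.web>
</configuration>
```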

Debug mode disabled and timeouts

In the optimistic case, when debug is turned off, you can control your application's timeouts with this attribute:

<httpRuntime executionTimeout="40" />

executionTimeout specifies the maximum number of seconds that a request is allowed to execute before being automatically shut down by ASP.NET.

But remember that:

This time-out applies only if the debug attribute in the compilation element is False.
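
Putting the two settings together, a production web.config would contain something like this minimal sketch (40 seconds is just an example value):

```xml
<system.web>
  <!-- debugging off: batch compilation, caching and timeouts are active -->
  <compilation debug="false" />
  <!-- requests running longer than 40 seconds are shut down by ASP.NET -->
  <httpRuntime executionTimeout="40" />
</system.web>
```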

I hope that this post made these things clear for you; check your configuration and make sure that web.config settings are not slowing your application down.

Tuesday, 5 August 2008

Power of the Retrospective

After over a year of developing a number of EPiServer projects, we finally managed to get everyone in one room and do a retrospective. It was great to realize that the EPiServer team is actually not so small anymore and that people have done lots of cool stuff over the last few months.

But first things first ... what is a retrospective? It's a special meeting held to discuss the previous iteration, during which people try to find ways to improve the process in the future. Retrospective meetings are actually quite a fundamental part of an agile software development process, which should evolve and improve with each iteration.

We did a quite high-level review of all our EPiServer projects, and it was very positive that people didn't complain much about EPiServer itself. Of course, we had to deal with a number of issues (like performance), but in the end we managed to overcome all the problems. Not surprisingly, the most difficult part was communication between clients and developers. Usually when people say something like this, they mean communicating requirements to the developers ... and they are right, but ... in many cases the problems were also on our side. We haven't done the best job in terms of showing the client how to use our page types, custom properties and so on. That's something which we definitely need to improve. It's a challenge to document page types in a way that is friendly to the end user. (How do you deal with this problem in your projects?)

Thanks to this meeting we were able to identify top priority problems which need to be solved but also it was a great opportunity to share knowledge. It's literally impossible to be up-to-date with all the ongoing projects within the Cognifide therefore it's quite often the case that some really cool stuff implemented in one project can be used in another one but people simply don't know about it. Retrospective meetings are the key to raise people awareness about 'already solved problems' and warm up internal communication.