Tuesday 24 February 2009

EPiServer -- Deleted Pages


I've been asked a few times recently how to detect deleted pages -- deleted from the editor's perspective, which means moved to the Recycle Bin. The answer is really simple: the PageData class has a property called IsDeleted. Here is an example:

   public static bool IsPageDeleted(PageReference pageRef)
   {
       PageData page = DataFactory.Instance.GetPage(pageRef);
       return page.IsDeleted;
   }

In fact, that is all that is needed to check whether a page has been deleted. Below are a few additional related tips which might be useful:
  • You can get a reference to the Recycle Bin via this static property:

       PageReference.WasteBasket
  • Another way to check whether an instance of the PageReference class points to the Recycle Bin is to call this method:

       DataFactory.Instance.IsWastebasket(new PageReference(12))
  • If you would like to move a page to the Recycle Bin programmatically, you shouldn't actually delete the page; use this method instead (see the sketch after this list):

       DataFactory.Instance.MoveToWastebasket(new PageReference(121));
  • EPiServer has a default scheduled job called "Automatic Emptying of Recycle Bin", described as follows:
    With Automatic Emptying of Recycle Bin, you can set how often your Recycle Bin should be emptied. The aim of this function is to stop old information from being left in the Recycle Bin for a long period of time. With automatic emptying, all information that is older than 30 days will be deleted from the Recycle Bin.

    So what can be wrong when pages in your Recycle Bin don't get removed? The most likely cause is that this scheduled job is not activated -- make sure that the Active checkbox is checked ;)
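Putting the above calls together, here is a minimal sketch of a helper that moves a page to the Recycle Bin only if it isn't there already (it uses only the API members listed above; error handling is omitted):

   public static class RecycleBinHelper
   {
       // move a page to the Recycle Bin unless it is already there
       public static void SafeMoveToWastebasket(PageReference pageRef)
       {
           PageData page = DataFactory.Instance.GetPage(pageRef);

           // IsDeleted is true when the page is already in the Recycle Bin
           if (page.IsDeleted)
               return;

           DataFactory.Instance.MoveToWastebasket(pageRef);
       }
   }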
If you need further details, or some relevant information is missing, feel free to leave a comment.


Wednesday 18 February 2009

Google Trends

Have you ever wanted to check how popular certain keywords are in search engines? Or maybe how busy popular websites are? Now it's all possible with Google Trends.

For instance, you can check the number of daily unique visitors for popular websites like Twitter and Digg, and additionally you can compare them on a single chart:

You can clearly see that Twitter is growing, whereas Digg has some problems keeping its levels up.

If you check the Search Volume Index you will come to the same conclusion:

On this graph you don't see actual traffic numbers:
The numbers you see on the y-axis of the Search Volume Index aren't absolute search traffic numbers. Instead, Trends scales the first term you've entered so that its average search traffic in the chosen time period is 1.0; subsequent terms are then scaled relative to the first term. Note that all numbers are relative to total traffic.
It's not my intention here to show you that Twitter is trendy; I want to give you an idea of how much interesting information you can find with Google Trends. You can now easily check how popular different ideas/technologies/products/politicians are. Try, for instance, to see how rapidly the number of searches for ASP.NET MVC is growing - it's growing really fast. (Actually, I learned about Google Trends thanks to the article Interest in ASP.NET MVC is raising.)

Remember though, Google Trends will only give you estimated values; this is not accurate data:
It's important to keep in mind that all results from Trends for Websites are estimated. Moreover, the data is updated periodically, so recent changes in traffic data may not be reflected. Finally, keep in mind that Trends for Websites is a Google Labs product, so it's still in its early stages of development and may therefore contain some inaccuracies.
In my opinion, even though the data is estimated and not all websites are included, it's still an awesome tool!

Tuesday 17 February 2009

FluentConfiguration -- New API to configure NHibernate

From the very beginning, Fluent NHibernate has provided a really clean API to configure NHibernate. I didn't expect to see any changes in this area ... and yet a new "fluent" way to configure NHibernate has been introduced.

This is the way I have been using so far (it still works well):

   private static Configuration GetNHibernateConfig()
   {
       return MsSqlConfiguration.MsSql2005
           .ConnectionString(c => c.Is(@"Data Source=db_server;Database=db_name;...."))
           .UseReflectionOptimizer()
           .ShowSql()
           .ConfigureProperties(new Configuration());
   }

   public static ISessionFactory GetSessionFactory()
   {
       // configure NHibernate
       Configuration config = GetNHibernateConfig();

       var models = new PersistenceModel();

       // alter default conventions if necessary
       SetUpConvention(models.Conventions);

       models.addMappingsFromAssembly(typeof (Product).Assembly);
       models.Configure(config);

       // save xml files with mappings to some random location
       models.WriteMappingsTo(@"c:\dev\mappings");

       // build factory
       return config.BuildSessionFactory();
   }

Honestly, I didn't expect that readability could be improved much ... but check this:

   public static ISessionFactory GetFluentlyConfiguredSessionFactory()
   {
       return Fluently.Configure()
           .Database(MsSqlConfiguration
                         .MsSql2005
                         .ConnectionString(c => c.Is(@"Data Source=db_server;Database=db_name;....")))

           .Mappings(m =>
                     m.FluentMappings.AddFromAssemblyOf<Product>()
                         .ConventionDiscovery.Add(new AdventureWorksConvention())
                         .ExportTo(@"c:\dev\mappings"))

           .BuildSessionFactory();
   }

What I really like about Fluent NHibernate is that it doesn't force the user to take all or nothing. If you want, you can easily combine different types of mappings. For instance, you can add Fluent NHibernate to your existing project, reuse the old mappings (XML files) and add new mappings configured with the fluent API. Here is an example:

   public static ISessionFactory GetFluentlyConfiguredSessionFactoryWithHbmFiles()
   {
       return Fluently.Configure()
           .Database(MsSqlConfiguration
                         .MsSql2005
                         .ConnectionString(c =>
                                           c.Is(@"Data Source=db_server;Database=db_name;....")))

           .Mappings(m =>
                         {
                             m.FluentMappings.AddFromAssemblyOf<Product>()
                                 .ConventionDiscovery.Add(new AdventureWorksConvention())
                                 .ExportTo(@"c:\dev\mappings");
                             m.HbmMappings.AddFromAssemblyOf<Product>();
                         })

           .BuildSessionFactory();
   }
In a similar way it is possible to combine standard Fluent NHibernate mappings with XML files and auto mapping, as sketched below.
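Here is a sketch of all three mapping types combined. Note that the auto-mapping part is an assumption on my side -- the entry point (AutoMap.AssemblyOf<T>() with a Where filter) was still evolving at the time, so treat it as illustrative rather than definitive:

   public static ISessionFactory GetSessionFactoryWithAllMappingTypes()
   {
       return Fluently.Configure()
           .Database(MsSqlConfiguration
                         .MsSql2005
                         .ConnectionString(c => c.Is(@"Data Source=db_server;Database=db_name;....")))

           .Mappings(m =>
                         {
                             // classic fluent ClassMap<T> mappings
                             m.FluentMappings.AddFromAssemblyOf<Product>();

                             // existing hbm.xml files embedded in the assembly
                             m.HbmMappings.AddFromAssemblyOf<Product>();

                             // auto mappings (assumed API), restricted to one namespace
                             m.AutoMappings.Add(
                                 AutoMap.AssemblyOf<Product>()
                                     .Where(t => t.Namespace == "AdventureWorksPlayground.Domain.Production"));
                         })

           .BuildSessionFactory();
   }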

To keep things short and concise I didn't include anything about conventions, but you can find the details in this post.


Monday 16 February 2009

EPiServer - Outgoing Links

In this post I will show how to get a list of all referenced pages (and files) for any EPiServer page. Although it sounds like a trivial task, it is in fact not so obvious. First of all, it's necessary to realize that there are two major groups of "linking" properties:
  • Properties that derive from PropertyPageReference; internally they store the link as a page id. Out of the box there is only one property type in EPiServer which uses this class -- PageReference.
  • And a bunch of properties which use permanent links internally, like:
    • PropertyImageUrl - Url to image
    • PropertyDocumentUrl - Url to document
    • PropertyUrl - URL to page/external address
    • PropertyXhtmlString - Xhtml Long String
    • PropertyLinkCollection - Link Collection
It's fairly simple to get the referenced page from a PropertyPageReference:

   var pageReference = CurrentPage.Property["property_name"] as PropertyPageReference;
   var page = DataFactory.Instance.GetPage(pageReference.PageLink);

What about the other property types? There is one thing they have in common -- they all implement the IReferenceMap interface.


We can use the following code to get the outgoing links:

   var referenceMap = property as IReferenceMap;
   if (referenceMap != null)
   {
       IList<Guid> linkIds = referenceMap.ReferencedPermanentLinkIds;
       foreach (Guid guid in linkIds)
       {
           PermanentLinkMap map = PermanentLinkMapStore.Find(guid);
           if (map == null)
           {
               // broken link -- the referenced page was deleted
               continue;
           }

           // mappedUrl example: /Templates/Public/Pages/NewsItem.aspx?id=30
           string mappedUrl = map.MappedUrl.ToString();

           // and get the friendly URL version using the UrlRewriteProvider
           var url = new UrlBuilder(mappedUrl);
           EPiServer.Global.UrlRewriteProvider.ConvertToExternal(url, null, System.Text.Encoding.UTF8);
           string friendlyUrl = UriSupport.AbsoluteUrlBySettings(url.ToString());
       }
   }

What are permanent links?

Internal URL's in EPiServer are stored in the database using a format called Permanent Links. Property types are responsible to transform a URL from a permanent link to a standard template link upon access from user code, and of course the other way around before content is stored to the database.
It's a very useful feature of EPiServer because it enables you to manipulate files and pages without the risk that some links will get broken.
You can rename files and templates without affecting the links; you can even move an EPiServer site from a virtual directory to a root site without breaking a single link.

Permanent Links and EPiServer's API

The IReferenceMap interface exposes the ReferencedPermanentLinkIds property, which gives us access to all the links stored internally. That is very convenient, especially for properties like PropertyXhtmlString which usually also store lots of other data. It is worth noticing that PropertyLongString doesn't implement this interface, and hence doesn't use permanent links. That is one reason why PropertyXhtmlString is recommended over PropertyLongString.

The PermanentLinkMapStore class is part of EPiServer's API for permanent links. I used this class to get the mapped URL based on the link's Guid. In the next step the mapped URL can be converted to a friendly URL (code based on Ted Nyberg's post). A permanent link can be broken (the referenced page was deleted), in which case the PermanentLinkMapStore.Find() method will return null -- hence the null check in the code above.
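To put both groups together, here is a minimal sketch of a method that walks all properties of a page and collects its outgoing links (GetOutgoingLinks is my own hypothetical helper name; the appropriate EPiServer usings are assumed, and only the API members discussed above are used):

   public static IEnumerable<string> GetOutgoingLinks(PageData page)
   {
       var links = new List<string>();

       foreach (PropertyData property in page.Property)
       {
           // group 1: PropertyPageReference stores the link as a page id
           var pageReference = property as PropertyPageReference;
           if (pageReference != null && !PageReference.IsNullOrEmpty(pageReference.PageLink))
           {
               links.Add(DataFactory.Instance.GetPage(pageReference.PageLink).LinkURL);
               continue;
           }

           // group 2: everything implementing IReferenceMap stores permanent links
           var referenceMap = property as IReferenceMap;
           if (referenceMap == null)
               continue;

           foreach (Guid guid in referenceMap.ReferencedPermanentLinkIds)
           {
               PermanentLinkMap map = PermanentLinkMapStore.Find(guid);
               if (map == null)
                   continue; // broken link -- the referenced page or file was deleted

               links.Add(map.MappedUrl.ToString());
           }
       }

       return links;
   }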

Based on the above code I have created an edit mode plugin which lists all outgoing links; it looks like this:

Source code can be downloaded from here.


Saturday 14 February 2009

Basic Software Estimation Concepts

In one of my recent posts I wrote that single-point estimates are meaningless. In this post I would like to carry on with this topic and discuss a few other fundamental concepts of software estimation, based on Steve McConnell's "Software Estimation: Demystifying the Black Art".

One of the most important things is to know the difference between estimates, targets and commitments.
While a target is a description of a desirable business objective, a commitment is a promise to deliver defined functionality at a specific level of quality by a certain date. A commitment can be the same as the estimate, or it can be more aggressive or more conservative than the estimate. In other words, do not assume that the commitment has to be the same as the estimate; it doesn't.
It's quite a typical situation: developers are asked to estimate a new project or piece of functionality for which the deadline is already set. You have to know whether you are really being asked to provide estimates or to figure out how to meet the deadline. Those are two totally different things. Estimation should be unbiased, therefore the deadline doesn't matter. If the deadline matters, then you are in fact being asked to provide a plan whose goal is to deliver before the deadline.

Single-point estimates are meaningless; estimates should always be expressed as a range -- the best and the worst case scenario. And don't estimate only at the beginning of a project: estimates can be useful at every stage, and they can show when the project goal is in danger.
Once we make an estimate and, on the basis of that estimate, make a commitment to deliver functionality and quality by a particular date, then we control the project to meet the target. Typical project control activities include removing non-critical requirements, redefining requirements, replacing less-experienced staff with more-experienced staff, and so on
Controlling the project includes dealing with changing requirements. But with new requirements the estimates also change, and after a few iterations your target is to deliver something radically different from what was estimated at the very beginning. How can you then say whether the initial estimates were accurate?

In practice, if we deliver a project with about the level of functionality intended, using about the level of resources planned, in about the time frame targeted, then we typically say that the project "met its estimates," despite all the analytical impurities implicit in that statement.

If it's well known that the assumptions will change and the functionality will change, then what is the real purpose of estimates?

The primary purpose of software estimation is not to predict a project's outcome; it is to determine whether a project's targets are realistic enough to allow the project to be controlled to meet them.

An important implication is that the gap between estimates and actual times has to be small enough to be manageable. According to the book, about 20% is the limit that can still be controlled.

Estimates don't need to be perfectly accurate as much as they need to be useful. When we have the combination of accurate estimates, good target setting, and good planning and control, we can end up with project results that are close to the "estimates."

That takes us to the definition of a "good estimate":
A good estimate is an estimate that provides a clear enough view of the project reality to allow the project leadership to make good decisions about how to control the project to hit its targets.
All of that and much more can be found in the book; it's worth reading.

Thursday 12 February 2009

EPiServer 5 R2 and Link Collection property

With EPiServer 5 R2 a new property type was released -- Link Collection. It looks like EPiServer's version of the very popular Multipage property. In this post I would like to show you exactly how it can be used, and also what the pros and cons are.

After adding a property of this type to a page, you will see this in edit mode:


And with a few links added, the property looks like this:


This is the first significant change compared to the old Multipage property (MP) -- the list of all links is visible on the page. With the old MP it was necessary to click a button to get a popup with the list of links. That is a good change!

What is missing here for me is the ability to test links. The text you see for the first item on the list is not necessarily a page name (it might be custom clickable text), so it's impossible to figure out from this view which page is referenced.

The funny thing is that the title for such a link has the following form:

Very useful, isn't it? ;) I think the simplest solution would be to make the link text clickable.

After clicking the 'Add Link' or 'Edit' button you will get the old popup:


There are no surprises here; it's the old, well-known dialog.

Let's check now how to deal with a Link Collection in code. It's quite common to use a Repeater to display links:

   1:  <asp:Repeater ID="rptRelatedLinks" runat="server">
   2:      <HeaderTemplate><dl></HeaderTemplate>
   3:      <ItemTemplate><dt><asp:HyperLink runat="server" ID="hplMainLink" /></dt></ItemTemplate>
   4:      <FooterTemplate></dl></FooterTemplate>
   5:  </asp:Repeater>

And here is the code to get the links from the CurrentPage:

   PropertyLinkCollection links = (PropertyLinkCollection) CurrentPage.Property["RelatedLinks"];
   rptRelatedLinks.DataSource = links;
   rptRelatedLinks.DataBind();

And the ItemDataBound handler populating the Repeater:

   void rptRelatedLinks_ItemDataBound(object sender, RepeaterItemEventArgs e)
   {
       if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
       {
           LinkItem linkItem = (LinkItem) e.Item.DataItem;
           HyperLink link = (HyperLink) e.Item.FindControl("hplMainLink");

           // a mapped link has a form like:
           // <a href="/Templates/Public/Pages/Page.aspx?id=16&epslanguage=en" target="_blank"
           //       title="this is link title">Information about the meeting</a>
           string mappedLink = linkItem.ToMappedLink();

           // a permanent link has this form:
           // <a href="~/link/bb6aa3227f8f467bbe1a42154cb56ba5.aspx" target="_blank"
           //           title="this is link title">Information about the meeting</a>
           string permanentLink = linkItem.ToPermanentLink();

           // because the Href property will return a permanent link like
           // ~/link/bb6aa3227f8f467bbe1a42154cb56ba5.aspx
           //
           // it's necessary to use PermanentLinkMapStore.ToMapped(url) to convert it to the normal form;
           // the result tells us whether the conversion was successful --
           // it will fail for mails (mailto:test@test.com), documents and external links
           UrlBuilder url = new UrlBuilder(linkItem.Href);
           bool result = PermanentLinkMapStore.ToMapped(url);

           link.NavigateUrl = result ? url.ToString() : linkItem.Href;
           link.Text = linkItem.Text;
           link.ToolTip = linkItem.Title;
           link.Target = linkItem.Target;
       }
   }

The basic problem is that the Href property returns a permanent link, therefore it's necessary to use the PermanentLinkMapStore class to convert the links. The ToMappedLink() method returns a full "a" tag, which might be convenient in some cases. Take a look at all the properties again:

My overall impression is positive. The new Link Collection property is easy to use, but there are certainly things which could be improved, like the ability to test a link, or the ability to define a page root when browsing for pages to include. It's a hassle to always start from the very top!

I wonder now whether there is still a reason to use the old Multipage property -- what do you think?


Sunday 8 February 2009

That's what I call a comfortable office

You guys know Joel Spolsky, right? At the moment he is the CEO of Fog Creek Software, a small company in Manhattan. I'm really impressed with their new office; take a look at the slideshow.

Joel has written a few times that his main goal is to provide the best possible environment for software developers and in this way achieve the highest productivity:

Building great office space for software developers serves two purposes: increased productivity, and increased recruiting pull. Private offices with doors that close prevent programmers from interruptions allowing them to concentrate on code without being forced to stop and listen to every interesting conversation in the room. And the nice offices wow our job candidates, making it easier for us to attract, hire, and retain the great developers we need to make software profitably. It’s worth it, especially in a world where so many software jobs provide only the most rudimentary and depressing cubicle farms.
I know exactly how important that is, as I'm lucky to work for a company with the same goal. But still, there are things I feel jealous about after checking the slideshow. Besides the awesome design, I especially like the 30" monitors (I had a chance to work a bit on 22" monitors and I'm 100% convinced that it makes a difference) and the long desks (huge monitors require huge desks ;) ). I'm sure that one day I will talk my boss into buying such nice monitors for us ;)


Here you can find Joel's post about the new office, and an article about the office in The New York Times.

Friday 6 February 2009

Fluent NHibernate and Collections Mapping

You can find bits and pieces about mapping collections with NHibernate in many different places, but I decided to write another post about it anyway. What is different about my post? I hope to gather here, in one place, all the relevant information regarding the most common mappings: many-to-one/one-to-many and many-to-many. In my examples I'm using the Fluent NHibernate API, but the XML counterparts are also included. All examples are based on the following schema (a subset of the AdventureWorks database):


Bidirectional many-to-one/one-to-many

This is the most common type of association; I have used the Product and ProductReview tables to show how it works and how it can be mapped.

In our case each product (one) can have multiple reviews (many). On the ProductReview side the association can be mapped like this:

   References(x => x.Product, "ProductID").FetchType.Join();
which is equivalent to:

   <many-to-one fetch="join" name="Product" column="ProductID" />
What does FetchType do? Basically, FetchType can be Select or Join; it defines how we want NHibernate to load the product for us. Consider this code:

   var review = session.Get<ProductReview>("1");
   var productName = review.Product.Name;

For FetchType.Join(), NHibernate will call the database once with the following query:

   SELECT ... FROM ProductReview pr left outer join Product p on pr.ProductID=p.ProductID WHERE ... 
As you can see, the review and the product are loaded with one call. For FetchType.Select() we will get two calls:

   SELECT ... FROM ProductReview pr WHERE ... 
   SELECT ... FROM Product p WHERE ... 
The second call will be executed on demand, that is, only if we try to use the Product object as in the above example: var productName = review.Product.Name;

In general, you have to determine in each case which FetchType is more beneficial for you.

Now, let's check the Product side. Product is the "one" side of the one-to-many association and holds the collection of reviews; I have chosen to use an ISet:

   HasMany(x => x.ProductReview)
       .KeyColumnNames.Add("ProductID")
       .AsSet()
       .Inverse()
       .Cascade.All();
The corresponding XML mapping:

   1:      <set name="ProductReview" inverse="true" cascade="all">
   2:        <key column="ProductID" />
   3:        <one-to-many class="AdventureWorksPlayground.Domain.Production.ProductReview, AdventureWorksPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
   4:      </set>
There are two potentially confusing things here:
  • inverse="true" - it tells NHibernate that the other side of this association is the parent. I know it sounds the other way round, but that's how it is. The ProductReview table has the foreign key (the ProductID column), therefore ProductReview controls the association.
    What are the implications? In the above example review.Product has to be set correctly, as this is the property NHibernate will check to figure out which product is associated with the review. It will ignore the collection of reviews on the product!
  • cascade="all" - it tells NHibernate that all events (like save, update, delete) should be propagated down. Calling session.SaveOrUpdate(product) will save (or update) the product itself, but the same event will also be applied to all dependent objects.
We are almost ready to move on to many-to-many associations, but before we do, check this piece of code:

   var product = new Product
                     {
                         Name = "Bike",
                         SellStartDate = DateTime.Today
                     };

   product.ProductReview.Add(new ProductReview
                                 {
                                     Product = product,
                                     Rating = 4,
                                     ReviewerName = "Bob",
                                     ReviewDate = DateTime.Today
                                 });

   product.ProductReview.Add(new ProductReview
                                 {
                                     Product = product,
                                     Rating = 2,
                                     ReviewerName = "John",
                                     ReviewDate = DateTime.Today
                                 });

   session.SaveOrUpdate(product);

Can you see a potential problem here? Each ProductReview knows about its Product, and thanks to cascade="all" everything is configured correctly, but you may still end up with just one review in the database ... why? I'm using an ISet here, so it guarantees that the collection contains only unique objects. Most people know that NHibernate classes should have the Equals() and GetHashCode() methods overridden; this is useful when you want to check that two objects represent the same row in a database. People often use the primary id column in the Equals() implementation -- the primary id is unique, so it fits perfectly, doesn't it? It does once the primary key is assigned, but in the above example the objects are saved in the last line; before that, they don't have any primary id, so both reviews compare as equal and the set keeps only one of them. That is a reason to use different data to determine equality, as sketched below.
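Here is a minimal sketch of equality based on business data rather than the database id. ReviewerName and ReviewDate are used here purely for illustration -- pick whatever combination is genuinely unique in your domain:

   public class ProductReview
   {
       public virtual int ProductReviewId { get; set; }  // assigned by the database on save
       public virtual Product Product { get; set; }
       public virtual string ReviewerName { get; set; }
       public virtual DateTime ReviewDate { get; set; }
       public virtual int Rating { get; set; }

       public override bool Equals(object obj)
       {
           var other = obj as ProductReview;
           if (other == null)
               return false;

           // compare on the natural key, which is available before the row is saved
           return ReviewerName == other.ReviewerName
                  && ReviewDate == other.ReviewDate;
       }

       public override int GetHashCode()
       {
           int hash = ReviewerName != null ? ReviewerName.GetHashCode() : 0;
           return (hash * 397) ^ ReviewDate.GetHashCode();
       }
   }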

Bidirectional many-to-many association

For this association I have used the Product, ProductProductPhoto (link) and ProductPhoto tables. Each product can have multiple photos, but each photo can also be associated with multiple products. ProductProductPhoto is just a link table and doesn't have any representation as a separate class. On both sides the mapping looks very similar.

Product side:

   HasManyToMany(x => x.Photos)
       .AsBag()
       .WithTableName("Production.ProductProductPhoto")
       .WithParentKeyColumn("ProductID")
       .WithChildKeyColumn("ProductPhotoID")
       .Cascade.All();
which produces XML like this:

   1:  <bag name="Photos" cascade="all" table="Production.ProductProductPhoto">
   2:        <key column="ProductID" />
   3:        <many-to-many column="ProductPhotoID" class="AdventureWorksPlayground.Domain.Production.ProductPhoto, AdventureWorksPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
   4:  </bag>
and the ProductPhoto side:

   HasManyToMany(x => x.Products)
       .AsBag()
       .WithTableName("Production.ProductProductPhoto")
       .WithParentKeyColumn("ProductPhotoID")
       .WithChildKeyColumn("ProductID")
       .Inverse();
XML:

   1:  <bag name="Products" inverse="true" table="Production.ProductProductPhoto">
   2:        <key column="ProductPhotoID" />
   3:        <many-to-many column="ProductID" class="AdventureWorksPlayground.Domain.Production.Product, AdventureWorksPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
   4:  </bag>
In effect, Product has a collection of photos (IList<ProductPhoto> Photos) and ProductPhoto has a collection of products. In cases like this it's mandatory to mark one side as inverse="true".

This is a fairly straightforward example, but unfortunately not a very common one. In the typical case the link table has some additional data (like ProductDocument, which has a ModifiedDate column), and that additional data forces us to use a different approach. Among the NHibernate best practices you can find this general guideline:

Good usecases for a real many-to-many associations are rare. Most of the time you need additional information stored in the "link table". In this case, it is much better to use two one-to-many associations to an intermediate link class. In fact, we think that most associations are one-to-many and many-to-one, you should be careful when using any other association style and ask yourself if it is really necessary.
So, in fact, for the tables Product, Document and ProductDocument we have to create three classes and three mappings. Both Product and Document link to each other through the ProductDocument object. The interesting part is ProductDocument, which has a composite primary key (two columns); it can be mapped in the following way:

   public class ProductDocumentMap : ClassMap<ProductDocument>
   {
       public ProductDocumentMap()
       {
           UseCompositeId()
               .WithKeyReference(x => x.Product, "ProductID")
               .WithKeyReference(x => x.Document, "DocumentID");

           Map(x => x.ModifiedDate).Not.Nullable();
       }
   }
and it generates XML like this:

   1:    <class name="ProductDocument" table="Production.ProductDocument" xmlns="urn:nhibernate-mapping-2.2">
   2:      <composite-id>
   3:        <key-many-to-one class="AdventureWorksPlayground.Domain.Production.Product, AdventureWorksPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Product" column="ProductID" />
   4:        <key-many-to-one class="AdventureWorksPlayground.Domain.Production.Document, AdventureWorksPlayground, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Document" column="DocumentID" />
   5:      </composite-id>
   6:      <property name="ModifiedDate" column="ModifiedDate" not-null="true" type="DateTime">
   7:        <column name="ModifiedDate" />
   8:      </property>
   9:    </class>
Going back to the plain many-to-many example with photos, we can write code like this:

   var product = CreateNewProduct();
   var photo1 = CreateNewPhoto();
   var photo2 = CreateNewPhoto();

   product.Photos.Add(photo1);
   product.Photos.Add(photo2);

   // we don't have to save the photos explicitly because of Cascade.All();
   // this single call results in:
   // INSERT INTO [Production.Product]
   // INSERT INTO [Production.ProductPhoto]
   // INSERT INTO [Production.ProductPhoto]
   // INSERT INTO [Production.ProductProductPhoto]
   // INSERT INTO [Production.ProductProductPhoto]
   session.SaveOrUpdate(product);
And that is everything I think is important on this subject ... anything missing? Leave a comment and I will try to add the missing parts.


Tuesday 3 February 2009

One Blog - One Major Topic

What makes any blog popular and successful?

High-quality posts, interesting comments, frequent updates ... those are obvious qualities which, with time, will finally pay off. But is that enough to attract people to subscribe to your blog? One good post is usually not enough to win people over - you need more than that, and for some blogs that is a problem. People keep posting about way too many different topics on one blog. What I found surprising is that, as you can read in the State of the Blogosphere, the average number of topics blogged about is five.

Are five topics per blog too many?

If the topics are related to each other then it is fine ... if your blog is about more general things like building web applications, then it makes sense to write about jQuery, Domain Driven Design and NHibernate. Those are different topics, but all related to building web applications. Personally, I like blogs that talk about a wide spectrum of things; it's easier to see the big picture. The problem with very specialized blogs is that you get the feeling you are reading a product specification ... it can be useful on many occasions, and it can help you solve some problems, but it's no longer fun and it's not thought-provoking.

On the other hand, if you find a blog which is a total mix of random, completely unrelated stuff like ASP.NET MVC and pictures from someone's trip to Africa, then you start to wonder whether filtering out the less interesting posts is worth your time. Two or three unrelated topics might be enough to make you forget about the blog.

So my advice is to clearly define what your blog is about and stick to it! It's a simple rule - if you write about building web applications, then avoid posting videos from your holidays. Don't confuse your subscribers, unless you write your blog only for your personal satisfaction and other things don't matter.