Posts From January 2010 - Musing, Rants & Jumbled Thoughts

Header Photo Credit: Lorenzo Cafaro (Creative Commons Zero License)

(Each month I plan to attend technical user groups in the Chicago area to (re-)learn from peers' experiences with new and existing technologies and to network with like-minded techies. This blog is one in a series of recaps of some of the more interesting aspects of the meetings for my own purposes (this is a "web log" after all) and for others to get a general taste of what's available in the Chicago user group scene.)

User Group: Chicago Architects Group

http://www.chicagoarchitectsgroup.com/

Location: ITA (200 S Wacker - next to Sears Tower)

Meeting Date: Tuesday, January 19th, 2010

The CAG meetings I’ve been to have had between 10 and 20 attendees and tend to be more of the Architect/Team Lead types and less of the hard-core, in your face techies/developers (this is not a bad thing, in my opinion). Like most user groups, they have pizza and soda as well as giveaways at the end (today included Windows 7 Ultimate, Office 2007, a wireless mouse and several books). Personally, I find the meetings to be useful, although I know some of my coworkers who have come with me will likely not attend unless the topics are of great personal interest to them.

Presenter: Tim Murphy

Blog: http://geekswithblogs.net/tmurphy

Twitter: @twmurph

Topic: Dependency Injection and IOC Containers

I'll admit up front that this entry will be a little light on content, such as the pros/cons to using this pattern – mainly I’m making notes for me to reference later. Here is a link to the presenter’s wrap-up as well, with slides and code (using Unity and Windsor). http://geekswithblogs.net/tmurphy/archive/2010/01/19/cag-january-2010-wrap-up.aspx

But by all means, please read on :-)

The Dependency Injection pattern is intended to lessen the coupling of objects by having dependent objects provided to an object instead of having the object itself create them. For example, say you have a Widget object which depends upon a LoggingManager object and a WidgetValidator object. Without Dependency Injection, you might instantiate your dependent objects inside your object's constructor, like this:

namespace WrightThisBlog.blogspot.com
{
    public static class MainApp1
    {
        public static void Main(string[] args)
        {
            var myWidget = new Widget();
            //do some widgetity stuff here
        }
    }

    public interface IWidget { }

    public interface ILogger { }

    public interface IValidator { }

    public class LoggingManager : ILogger { }

    public class WidgetValidator : IValidator { }

    public class Widget : IWidget
    {
        ILogger _logger;
        IValidator _validator;

        public Widget()
        {
            _logger = new LoggingManager();
            _validator = new WidgetValidator();
        }
    }
}

But now you're tied to a specific implementation of ILogger and IValidator, and if you ever wanted to change them, you'd have to modify your code everywhere those concrete classes are referenced and replace them with your new implementations. This is less than ideal and makes your code a bit fragile.

So how does Dependency Injection change this? Basically, by having the caller provide the ILogger and IValidator to the Widget object, either as parameters in the constructor or as properties. This is further decoupled in my example by using factory classes:

namespace WrightThisBlog.blogspot.com
{
    public static class MainApp2
    {
        public static void Main(string[] args)
        {
            IWidget myWidget = WidgetFactory.GetWidgetInstanceViaConstructor();
            IWidget myNextWidget = WidgetFactory.GetWidgetInstanceViaProperties();
            //do some widgetity stuff here
        }
    }

    public class ImprovedWidget : IWidget
    {
        public ImprovedWidget(ILogger loggerToUse, IValidator validatorToUse)
        {
            LoggerToUse = loggerToUse;
            ValidatorToUse = validatorToUse;
        }

        public ImprovedWidget() { }

        public ILogger LoggerToUse { get; set; }
        public IValidator ValidatorToUse { get; set; }
    }

    public static class WidgetFactory
    {
        ///<summary>
        ///Could provide in constructor
        ///</summary>
        public static IWidget GetWidgetInstanceViaConstructor()
        {
            ILogger logger = LoggerManagerFactory.GetLoggerInstance();
            IValidator validator = WidgetValidatorFactory.GetValidatorInstance();
            return new ImprovedWidget(logger, validator);
        }

        ///<summary>
        ///Or via properties (beware unset properties!)
        ///</summary>
        public static IWidget GetWidgetInstanceViaProperties()
        {
            return new ImprovedWidget
                {
                    LoggerToUse = LoggerManagerFactory.GetLoggerInstance(),
                    ValidatorToUse = WidgetValidatorFactory.GetValidatorInstance()
                };
        }
    }

    public static class LoggerManagerFactory
    {
        public static ILogger GetLoggerInstance() { return new LoggingManager(); }
    }

    public static class WidgetValidatorFactory
    {
        public static IValidator GetValidatorInstance() { return new WidgetValidator(); }
    }
}

Now you’ve created a loose coupling between your Widget and its supporting objects, which will come in handy when

  • You want to use a mocking tool (like RhinoMocks) to unit test your code
  • You want to replace your LoggingManager with something else that implements ILogger – now you only need to update the factory classes (technically, using a factory class is a different pattern, but works well here)
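To make the first bullet concrete: because ImprovedWidget receives its ILogger from outside, a unit test can hand it a fake and then inspect what was logged. This is a minimal sketch of my own (FakeLogger is hypothetical, and I've given ILogger a Log method purely for illustration -- in the example above the interfaces are empty markers):

```csharp
using System.Collections.Generic;

public interface ILogger { void Log(string message); }

// Hypothetical test double -- records messages instead of writing them anywhere.
public class FakeLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Log(string message) { Messages.Add(message); }
}

public class ImprovedWidget
{
    private readonly ILogger _logger;

    // The dependency is injected, so tests can supply a FakeLogger.
    public ImprovedWidget(ILogger loggerToUse) { _logger = loggerToUse; }

    public void DoWidgetyStuff()
    {
        _logger.Log("Widget did its thing");
    }
}
```

A test can then assert on fake.Messages without any real LoggingManager (or file system, database, etc.) being involved.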

Still, you're coding a concrete implementation into your factory, so while you've reduced the number of places you need to update, you're still hard-coding an implementation. Additionally, if there are very deep dependencies (your Logger needs a FileManager, which needs a PermissionsManager, which needs a CurrentUserManager...), this can get pretty ugly to manage and you end up writing a lot of plumbing code that is only tangential to the application's real purpose.

This is where IOC containers come into play. Using an IOC framework, you define which concrete classes implement your interfaces and what dependencies they have. Then you use the IOC container like a factory class to instantiate your objects. There are two primary means of defining the dependency trees: XML in your app.config, or code. Per the group discussion, some IOC frameworks (such as StructureMap) provide tools which will auto-generate your code mappings, while others (like Microsoft's Unity framework) use [Dependency] attributes to denote where there are dependencies. This allows you to say "I need an IWidget" and the container framework will know that an ImprovedWidget should be created, that it has dependencies on ILogger and IValidator, that those are provided by LoggingManager and WidgetValidator, and so on.

using Some.IOC.Framework;

namespace WrightThisBlog.blogspot.com
{
    public static class MainApp3
    {
        public static void Main(string[] args)
        {
            var container = IOC_Framework.GetContainer();
            IWidget myWidget = container.GetObject<IWidget>();
            //do some widgetity stuff here
        }
    }
}
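Some.IOC.Framework above is a placeholder, but the core mechanic is small enough to sketch by hand: a container is essentially a dictionary from interface type to a factory delegate, consulted recursively when building an object graph. This toy version is my own illustration (not any of the frameworks listed below), with mappings registered in code:

```csharp
using System;
using System.Collections.Generic;

// Toy IOC container: maps an interface type to a factory delegate.
public class TinyContainer
{
    private readonly Dictionary<Type, Func<object>> _registrations =
        new Dictionary<Type, Func<object>>();

    // The factory receives the container so it can resolve nested dependencies.
    public void Register<TInterface>(Func<TinyContainer, object> factory)
    {
        _registrations[typeof(TInterface)] = () => factory(this);
    }

    public T GetObject<T>() { return (T)_registrations[typeof(T)](); }
}

public interface ILogger { }
public interface IValidator { }
public interface IWidget { }
public class LoggingManager : ILogger { }
public class WidgetValidator : IValidator { }

public class ImprovedWidget : IWidget
{
    public ImprovedWidget(ILogger logger, IValidator validator) { }
}
```

Usage looks much like the MainApp3 snippet: register the mappings once at startup, then ask for an IWidget and the container walks the dependency tree (ILogger, IValidator) for you. The real frameworks add lifetime management, configuration files, attribute scanning, etc. on top of this idea.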

Resources and Reference:

Some IOC Frameworks:

  • Structure Map
    • Has “scanning” tool to auto-map dependencies and limit the amount of manual configuration needed. (If only one class in your project implements a given interface, that class will be used when that interface is requested).
  • Ninject
  • Castle Windsor
    • Uses app.config to define the assemblies/classes to use for each interface. Uses the constructor to provide dependencies.
  • Microsoft Unity (part of P & P group)
    • Uses [Dependency] attributes placed on class properties to determine where dependencies are needed, as well as to get/set them.



User Group: Chicago Alt.Net

Location: Willis (aka Sears) Tower


Meeting Date: Wednesday, January 13, 2010


If you're in the Chicagoland area and do any .Net programming, I recommend this group. It's got a decent turnout (usually). I've been to three meetings with anywhere from 20 - 40 people in attendance. And they have good giveaways if you stay until the end. Today was especially good: a copy of Windows 7 Ultimate, Office 2007, the JetBrains product of your choice (ReSharper, dotTrace, TeamCity, IntelliJ -- great products if you're not familiar), a $50 Barnes & Noble card, and of course a few Microsoft tech books. And free pizza!


Presenter: John Nuechterlein (aka Jdn) (http://www.blogcoward.com)

Topic: CQRS in roughly an hour or so

(CQRS == "Command Query Responsibility Segregation")


So, I'll be honest: going into this presentation, I had only a very vague idea of what CQRS is and even less idea why I should care. (I had actually planned to play poker that night instead, but poker got canceled.) Leaving this presentation, my mind was swirling with all of the projects where CQRS would have been great to use (and the many parallels with the "massively" distributed computing environment we developed at Wayport -- where "massive" is defined in pre-Google, pre-Facebook terms).

So, what is CQRS? Frankly, I couldn't do it justice in this blog -- but I needn't try, since smarter, more eloquent people have done so before me. Actually, from what I understand, CQRS got its start in the blogosphere. For more depth, here is one key link (http://elegantcode.com/2009/11/11/cqrs-la-greg-young/), but here's my lame attempt at a quick definition:


CQRS is a system architecture/design pattern that separates the act of reading data (query) from taking action (command) in order to produce a system which easily scales and provides some useful benefits (such as "playable" event logs) that make the maintenance of the system less burdensome. (For my purposes, I'm going to designate CQRS as an "architecture", partly because I don't want to write "architecture/design pattern" anymore.)



In my mind, CQRS lends itself pretty well to web-based systems, and SOA/SaaS in particular, although it could be applied elsewhere. I'm probably jumping ahead of myself a bit, but that observation sets up this next point. To consider using CQRS, you must buy into a fundamental assertion: your data is always stale. How stale depends on your system, but the fact that my last blog entry was about data caching techniques just goes to show that, particularly in web-based systems, we intentionally make some of our data stale by putting it into a cache.



But even without intentional caching, your data is stale. Consider this: you have an eCommerce site (say, amazon.com). Your end user pulls up the product page for the Widget2010 product, which includes price and quantity-in-stock information. In the 60 seconds it takes the user to read the page and click the shopping cart button, five other people have ordered their own Widget2010, so the quantity in stock is now less than what's on the first user's screen -- thus, the data is stale. You know this -- you expect this -- and you've already coded a dozen ways to deal with it, so accepting that "your data is always stale" is really not that big a leap of faith.

So why is that important? Because the CQRS architecture separates the reading of data and the acting upon data into two separate logical areas, where the reading area has a data cache which is stale. Now, it may only be 1 ms stale, but stale nonetheless.

So let's dive into my high-level summary of Jdn's high-level overview. (Note: if you want to bypass my bias and possibly complete misinterpretation of the presentation, it is (or will soon be) available on the http://chicagoalt.net website in video form.)

CQRS has four logical areas in your system design, as pictured in the drawing shamelessly stolen from the blog linked above:


Queries
This is the "reporting" piece -- or said another way, this is the read-only interface into your data. Basically, it's a lightweight data layer reading from your data store and providing DTOs back to your UI. It does NOT go through your Domain Model. It is simply a read-only view into your database. Your database, however, is really a cache of your data, and likely a local cache. In other words, since the data is just a cache of the "real" data, why not push it out as close to the UI as possible to minimize the latency to your UI? (Since the majority of your UI's interactions with your data are reads, making this link super-fast will in turn make your system faster.)

Since these are read-only views into your data, the DTOs are extremely simple and can/should be specific to the consumer (ie: no "Product" DTO, but rather a ProductForOrderPage which only has the data needed by the order page), reducing the amount of data getting dragged around between layers, etc. Why pull your full product from the db if you only need 20% of the information for the current page? Need to scale your UI? Just add another data store/cache -- it's practically a rubber stamp.

Commands
The next logical area of this architecture is the Command Bus/Command Handlers. This is where the next tenet of CQRS comes into play: system actions (aka "commands") have specific intent. If you've practiced Agile development and you're familiar with the concept of the User Story -- basically requirements as documented from the viewpoint of a user's interaction with the system -- then this may be easy to grasp. Basically, Commands are actions taken on your Domain objects. For example: CustomerChangedAddressCommand or AddProductToShoppingCartCommand.

Commands are the only way to update data in your system and are seen as atomic actions which can be wholly accepted or rejected. Commands are generated by the UI and pushed onto the Command Bus and picked up by Command Handlers.  Command Handlers in turn pass those commands into the Domain.  Since commands go onto a bus, they can be queued, prioritized, etc, just like any message on a messaging bus, thus allowing you to ensure your most important commands are handled appropriately (another scalability knob you can adjust to meet your performance needs).

Now, since commands require specific intent (ie: user updates their address/places item into shopping cart), this does have implications for your UI -- specifically, you can't have "Excel-like screens".  There's no "update everything about this user and here's all the data" command (or there shouldn't be), so if that's what you're looking for, you may want to look elsewhere.  But honestly, this may not be a bad thing, as it forces some system designs which will likely result in more user friendly, reliable systems in the long run.
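In code, a command ends up as a small message class named after the business action, with a handler that picks it off the bus and forwards it into the Domain. A minimal sketch of my own (the class and interface names beyond CustomerChangedAddressCommand are hypothetical, not from the talk):

```csharp
using System;

// A command is a plain message object capturing a user's specific intent.
public class CustomerChangedAddressCommand
{
    public Guid CustomerId { get; private set; }
    public string NewAddress { get; private set; }

    public CustomerChangedAddressCommand(Guid customerId, string newAddress)
    {
        CustomerId = customerId;
        NewAddress = newAddress;
    }
}

// A handler takes one command type off the bus and passes it into the Domain.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class CustomerChangedAddressHandler : ICommandHandler<CustomerChangedAddressCommand>
{
    // Stand-in for "load the Customer aggregate and apply the change".
    public string LastHandledAddress;

    public void Handle(CustomerChangedAddressCommand command)
    {
        LastHandledAddress = command.NewAddress;
    }
}
```

Note there is deliberately no UpdateCustomerCommand carrying every field -- the narrow, intent-revealing command is the whole point.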

Internal Events / Domain
This area is the authoritative knowledge source.  Your business logic resides here. Your Domain objects reside here.  Here is where you'll find the Event Store. This is the most complicated part of the system, and is where the presenter struggled at times to explain some concepts / answer some questions, so this is where you'll likely want to ensure you've done your homework. (To his credit, Jdn fully admitted up front he did not know all of the ins-and-outs of this area and did his best to explain).

As commands come into your Domain from the Command Handlers, your Domain objects validate business rules to determine if the command is valid for the current state of the world, and either reject the command in whole or, in one atomic action, update the state of the world according to the command (thus, an "event" occurs).


Events are written to the Event Store, which is persisted, likely in an RDBMS, ODBMS, etc.  A snapshot of the state of the world is taken periodically and if you want to recreate a place-in-time, just take the previous snapshot and replay the events in order from that snapshot until the place-in-time you care about. Your domain could just remain in memory, if you'd like -- otherwise you'd pull the most recently stored (via snapshot) version of your domain object(s) and replay any new events against that object until it's fully restored, then execute your new command and presto-chango!

Now this was of particular interest to me from a troubleshooting/audit standpoint. If a problem occurs on the site, pick a snapshot from right before the problem occurred and replay your events to reproduce the issue. (QA engineers applaud here.) If a new version deployment goes ghastly wrong, roll back the events to right before the deployment. Theoretically, you could even re-run the events against the older version of the software and (unless the events weren't supported in that version) recover the data changes. Try doing that when your domain is tied to your db schema or your audit history is at the db table level (how do you replay an "insert" when the columns have changed?).

One note here is that your Event Store is "write only" (append-only) -- meaning you don't ever delete things from your domain; you just adjust them. The presenter used the analogy of an accountant ("accountants don't use erasers"). If an accountant finds an error in the ledgers, they don't edit that line item -- instead they create an adjustment line item to offset the difference. The Event Store is similar -- you create events to adjust/negate/otherwise manipulate your data.
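The accountant's ledger maps directly onto the snapshot-plus-replay mechanic described above: state is never edited in place, it is re-derived by folding events over a starting point. A minimal illustration of my own (the Account/AccountEvent names are hypothetical, not from the presentation):

```csharp
using System.Collections.Generic;

// Append-only ledger entry -- "accountants don't use erasers".
public class AccountEvent
{
    public decimal Adjustment;
    public AccountEvent(decimal adjustment) { Adjustment = adjustment; }
}

public class Account
{
    public decimal Balance { get; private set; }

    // Rebuild state from a snapshot balance plus every event recorded since.
    public static Account Replay(decimal snapshotBalance, IEnumerable<AccountEvent> eventsSinceSnapshot)
    {
        var account = new Account { Balance = snapshotBalance };
        foreach (var e in eventsSinceSnapshot)
            account.Balance += e.Adjustment;
        return account;
    }
}
```

To recover any place-in-time, you pick the snapshot before it and replay only the events up to that moment; a correction is just another AccountEvent appended to the ledger, never an edit to an existing one.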

External Events / Publication
Now, to go full circle, the Domain / Event handling system will publish any events that it handles (but not those that it rejects).  Any data stores/caches will subscribe to that feed, and will update themselves based on those events. Thus, your data cache is only as stale as it takes to process the published events.  Again, you use a message bus here (or a webservice, etc) and use prioritization queues if you wish to enhance performance/scalability.

This is where the concept of "Eventual Consistency" is used -- that is, your Domain and your Data Stores will eventually sync up, just not necessarily in "real time" -- but we're ok with that because we've agreed that latency is almost always ok and availability trumps correctness in the Data Stores -- because "your data is always stale" anyway.

And the big finish...
To conclude, the presenter touched on reasons why you wouldn't want to use CQRS, including:

  • it's new, it's different
  • multiple data stores (maybe you don't want this)
  • operational complexity
  • lots of commands, events, handlers, etc.

…and my conclusion
All-in-all, this is definitely something I will consider for large systems in the future, although it is likely overkill for most systems in the market I serve.  I suggest doing some Google searches if you're interested at all in learning more (seems to be a good deal of data, videos, webinars, blogs, etc. out there).




When writing ASP.Net applications, you often have a want/need to cache data in the UI layer for reuse. Often this is used to improve performance (limit repeated database calls, for example) or store process state. Below is an overview of various ways to achieve this for various scenarios.

Executive Summary:

(ordered from shortest to longest typical duration)

  • ViewState -- Scope: current page/control and user (each page/control has its own ViewState). Lifespan: across post-backs. When to use: you need to store a value for the current page request and have it retrieved on the next post-back.
  • Base class -- Scope: current page/user and its child controls. Lifespan: current page instance. When to use: you need to store a value once per page request (such as values from a database) and have access to it during the current page request only.
  • HttpContext -- Scope: current page/user and its child controls. Lifespan: current page instance. When to use: same as Base class; you need to store a value once per page request and have access to it during the current page request only.
  • ASP Session -- Scope: site-wide, current user, all pages/controls. Lifespan: duration of the user's session. When to use: you need to store a value once per user's visit to the site (such as user profile data) and have access to it from any code for the duration of the user's visit.
  • ASP Cache -- Scope: site-wide, all pages/controls, all users. Lifespan: until it expires or the server restarts. When to use: you need to store data for access from any code for all users (such as frequently used, but rarely changed, database values -- such as a list of countries for an address form).
  • Cookies -- Scope: site-wide, current user, all pages/controls. Lifespan: until it expires or the browser deletes it. When to use: you need to store small data (such as a user's unique ID) from one visit to the next, or possibly across sites. Not for sensitive data!

ViewState:

If you're an ASP.Net developer, you should have a firm grasp of ViewState and all its benefits and drawbacks. Basically, ViewState allows you to store data in a special hidden input field which is provided back to you when the user posts back the page. This is similar to just using a hidden field, except that it is page-/control-specific (meaning, if you have a user control that is repeated on the page, each instance of the control can store its own ViewState data with the same key and get back its individual results). It also includes some basic protection against tampering.

// Set ViewState value while rendering the page
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Set ViewState value
    this.ViewState.Add("MyComputedValue", (IList)BLL.DoSomeComputationThatShouldOnlyRunOnce());
}

// After post-back, retrieve the value
protected void Page_Load(object sender, EventArgs e)
{
    IList myValue = (IList)this.ViewState["MyComputedValue"];
    if (myValue == null)
    {
        myValue = BLL.DoSomeComputationThatShouldOnlyRunOnce();
    }
}

I would discourage the use of ViewState for storing anything more than very small pieces of data, since this information is included in the rendered HTML and has to be downloaded/uploaded with each request, degrading performance. You can configure ViewState to use Session for its storage and eliminate the need to include it in the page's HTML, but if you're going that route, why not just use Session directly as your caching location (see below)?

BasePage (shared base class):

A common pattern I've used on just about every project (and highly suggest for many reasons) is to have a "BasePage" class which inherits System.Web.UI.Page, then have all of my application pages inherit from BasePage. This allows the developers to create shared "shortcuts" in one location which are accessible from all of the UI layer code.

Among other useful shortcuts (like storing singletons, etc.), you can create properties on your BasePage class for storing cached data during the current page invocation. For instance, if you're using the ASP.Net membership providers, you can store the current authenticated user in your BasePage so that you're not going to the database every time you call Membership.GetUser().

Note too, that this pattern can be combined with the other patterns listed, such as having a property that reads/writes data from Session, Cookies, etc., allowing for reduced code duplication.

using System.Web;
using System.Web.Security;

namespace MyProject.WEB
{
    public abstract class MyBasePage : System.Web.UI.Page
    {
        /// <summary>
        /// Cached reference to Membership.GetUser(); (currently authenticated user, or null if not auth'd).
        /// From Membership.GetUser():
        /// Gets the information from the data source and updates the last-activity date/time stamp for the current logged-on membership user.
        /// </summary>
        /// <returns>A System.Web.Security.MembershipUser object representing the current logged-on user.</returns>
        internal MembershipUser AuthenticatedUser
        {
            get
            {
                if (_authedUser == null)
                {
                    _authedUser = Membership.GetUser();
                }
                return _authedUser;
            }
        }
        private MembershipUser _authedUser;
    }
}

To follow this further, you can create a ControlBase class for your user controls which has a typed reference to the BasePage:

namespace MyProject.WEB
{
    public abstract class MyControlBase : System.Web.UI.UserControl
    {
        protected MyBasePage BasePage
        {
            get { return (MyBasePage)Page; }
        }
    }
}

Now, from within your control, you can use this.BasePage.AuthenticatedUser to get the currently logged-in user without having to go to the database more than once per page rendering.

HttpContext:

You can use the HttpContext Items collection to store values for the duration of the current page rendering (similar to the BasePage pattern above). Personally, I prefer the BasePage pattern, but there are some cases where it isn't possible, such as when you're working within a CMS framework like SiteCore and don't actually have access to the page. (SiteCore only allows you to create user controls and place them via their CMS framework.)

/// <summary>
/// Cached reference to Membership.GetUser(); (currently authenticated user, or null if not auth'd).
/// From Membership.GetUser():
/// Gets the information from the data source and updates the last-activity date/time stamp for the current logged-on membership user.
/// </summary>
/// <returns>A System.Web.Security.MembershipUser object representing the current logged-on user.</returns>
internal MembershipUser AuthenticatedUser
{
    get
    {
        if (HttpContext.Current.Items["CurrentUser"] == null)
        {
            HttpContext.Current.Items["CurrentUser"] = Membership.GetUser();
        }
        return (MembershipUser)HttpContext.Current.Items["CurrentUser"];
    }
}
 

ASP.Net Session:

Using ASP.Net Session provides a way to store data across page views for the duration of the user's visit to the site. Be careful -- I've seen many people get tangled up with stale session data, particularly on initial page loads. For instance: a user clicks a button which opens an Add/Edit popup in edit mode for a product. The developer stores the product info in Session, then opens the popup control, which checks Session for a product and goes into edit mode if product data exists. The user changes their mind and closes the popup (but the developer forgets to clear the product from Session in this case). Then the user clicks the "Add new product" button, showing the same control, which should be in "add" mode -- but since there is a stale product in Session, it enters edit mode for the previous product instead. Make sure that if a user is returning to a page after previously storing page state in Session, you correctly handle the potentially stale data.

public SortDirection LastSortDirection
{
    get
    {
        // Note: Session returns null if no value exists, and unboxing null would throw,
        // so fall back to a default explicitly.
        object stored = HttpContext.Current.Session["SortDir"];
        return stored == null ? SortDirection.Ascending : (SortDirection)stored;
    }
    set
    {
        HttpContext.Current.Session["SortDir"] = value;
    }
}

ASP.Net Cache:

The ASP.Net Cache can be used to store objects for a predetermined amount of time across all page requests (ie: at the server level). This is useful for data read from the database that isn’t often changed, such as a list of options for a drop-down list.

internal List<String> DropDownListOptions
{
    get
    {
        if (HttpRuntime.Cache["DropDownListOptions"] == null)
        {
            HttpRuntime.Cache.Insert("DropDownListOptions", DAL.GetListFromDatabase(), null,
                DateTime.Now.AddHours(24), System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return (List<String>)HttpRuntime.Cache["DropDownListOptions"];
    }
}

Cookies:

Cookies can be used to save data on the client side and have it returned to you on postback. Note, however, that unlike the other storage mechanisms, cookies have two different storage locations: one for the inbound value and one for the outbound value. So you can’t (at least, not without some additional logic) write a value, then read it again for use later in your page logic (your "read" will just re-get the original value, not the updated value). Generally, I would suggest reading the value at page load, storing it in a property on your page class, then writing it out again in your PreRender code.
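The read-at-load / write-at-PreRender pattern just described can be sketched like this. To keep the sketch self-contained (and testable outside a web server), plain dictionaries stand in for Request.Cookies and Response.Cookies -- in a real page you would use those collections and HttpCookie directly:

```csharp
using System.Collections.Generic;

public class CookieBackedCounter
{
    private readonly Dictionary<string, string> _requestCookies;  // stand-in for Request.Cookies
    public readonly Dictionary<string, string> ResponseCookies =
        new Dictionary<string, string>();                         // stand-in for Response.Cookies

    // Page-level property holding the working value between load and render.
    public int VisitCount { get; set; }

    public CookieBackedCounter(Dictionary<string, string> requestCookies)
    {
        _requestCookies = requestCookies;
    }

    // Page_Load: read the inbound cookie once into the property.
    public void OnLoad()
    {
        string raw;
        VisitCount = _requestCookies.TryGetValue("VisitCount", out raw) ? int.Parse(raw) : 0;
    }

    // Page logic works against the property, never the cookie collections directly.
    public void RecordVisit() { VisitCount++; }

    // OnPreRender: write the final value to the outbound cookie exactly once.
    public void OnPreRender()
    {
        ResponseCookies["VisitCount"] = VisitCount.ToString();
    }
}
```

Because the inbound and outbound collections are separate, all intermediate reads and writes go through the VisitCount property; the cookie itself is only touched once on the way in and once on the way out.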

Also note that not setting a cookie value on your response is not the same as deleting the cookie. The browser will keep the last cookie received until it expires or is explicitly overwritten.

Warning: cookies are stored on the user's machine, so don't store sensitive data there, and always validate the values you get back (it's easy to tamper with them). Encryption is suggested, as is setting the .Secure property to restrict transport to HTTPS.

private const string COOKIE_NAME = "MyCookie";

/// <summary>
/// Update the cookie, with expiration time a given amount of time from now.
/// </summary>
public void UpdateCookie(List<KeyValuePair<string, string>> cookieItems, TimeSpan? cookieLife)
{
    HttpCookie cookie = Request.Cookies[COOKIE_NAME] ?? new HttpCookie(COOKIE_NAME);

    foreach (KeyValuePair<string, string> cookieItem in cookieItems)
    {
        cookie.Values[cookieItem.Key] = cookieItem.Value;
    }

    if (cookieLife.HasValue)
    {
        cookie.Expires = DateTime.Now.Add(cookieLife.Value);
    }
    Response.Cookies.Set(cookie);
}
 
public string ReadCookie(string key)
{
    string value = string.Empty;

    if (Request.Cookies[COOKIE_NAME] != null)
    {
        value = Request.Cookies[COOKIE_NAME].Values[key];
        //optional: re-save the value via UpdateCookie() so the expiration rolls outward
    }

    return value;
}
 
public void DeleteCookie()
{
    var cookie = new HttpCookie(COOKIE_NAME)
    {
        Value = string.Empty,
        Expires = DateTime.Now.AddDays(-1)
    };
    Response.Cookies.Set(cookie);
}