The Phoenix pay system snowball.

The Phoenix pay system fiasco could get a whole lot worse before it gets better. We are witnessing a massive software project failure unfold in plain sight. Is the growing snowball of compounded failures gaining momentum or coming to a halt?

As the financial and productivity losses accumulate, so too does the damage to confidence in the government IT services industry. The public discourse surrounding the controversy has been inadequate, to say the least. We need more voices from experts in the practice to speak up. Too much focus has been aimed at blame rather than analysis of causes and solutions. The government, right or wrong, will seek to appease the loudest voices in this discussion. Let’s try to make the message the right one. If not, we risk repeating the same mistakes.

Legacy software systems cannot live forever. As technology platforms age, the talent pool available for system maintenance shrinks. Resource allocation becomes problematic. The potential for an unskilled workforce increases. Time delays creep into project initiatives. Another issue with legacy software relates to third-party vendors. Software systems always have dependencies on third-party technologies, and vendors only support older product versions for a limited period. Once that period expires, support is discontinued, and the host software can no longer address problems in those unsupported components, whether behavioral, performance-related or security-related.

I am not familiar with the system preceding the Phoenix pay system. But it had been in place for 40 years, so it’s very likely that it was in a legacy state and/or in need of major upgrades. In the wake of all the blowback from the Phoenix failures, many people opine that the existing system should have been left in place. But what upgrades were needed? What’s the cost analysis of reverting to the old system? If we are considering the preexisting system as an option in resolving the pay system issues, it’s very important to answer those questions.

The sales teams for COTS products like the system used in Phoenix are first-class. These products will save you time, save you money and accommodate all your business needs. Well, that is what the brochure says. Once you dig further into this, more details emerge.

Many organizational processes are not preconfigured into COTS systems. These processes typically take life in two ways. In some instances, system administrators input a series of instructions into the system. In other cases, an application developer programs the customization as a system extension. These scenarios usually make up a small percentage of the required functionality, but they carry the highest risk. The system can only perform as well as the quality of the instructions. Garbage in, garbage out, as they say.

These COTS systems don’t work without data. I would imagine that a pay system for all employees in the federal government requires a great deal of data. Migrating data from an existing pay system to a new system would be a significant project task. How many of the problems currently encountered in Phoenix relate to data migration? It’s not zero…

The issue of user training seemed to consume a few media cycles. Naturally, system users need training on any new product. Not only are the user interfaces different between products, but it’s also likely that logistics were changed and/or introduced as part of the new system. There’s a ramp-up time associated with user training, especially on a new system. These ramp-up times do tend to recede as the knowledge propagates into the user community.

I’m not sure how concrete the government’s plans are after the release of #budget2018. But they need to tread carefully. Migrating to another system offers no guarantee we won’t encounter similar issues. Customization work, data migration tasks and training will all need repeating.

Big data – Partitioned views using Entity Framework

Working with large data sets creates a unique challenge for software developers. Balancing simplicity, maintainability and performance is nearly impossible, but it’s nevertheless something developers should strive for.

With regard to maintainability and simplicity, Entity Framework has been a big winner for many organizations. Entity Framework is a time saver on many fronts: on top of its object-relational mapping capabilities, it also allows developers to easily create queries, database tables and stored procedures.

On the performance side, many problems associated with massive data tables can be greatly alleviated by using a technique, available from many database vendors, known as partitioned views. Partitioned views allow large data tables to be split into ranges of data and stored as multiple member tables. By doing so, queries can be optimized to seek data of a specific range, yielding significant performance gains.

Partitioned views can quite easily be imported into an Entity Framework model; however, there will be some limitations. Inserting and updating records of a view using Entity Framework can be problematic: it requires trickery on the .NET side and specific conventions on the SQL Server side. There’s also the issue of maintaining the partitioned view. How are new member tables added? How is the view updated to include new member tables?

In this article I will walk through the creation of a code-first Entity Framework library that dynamically creates partitioned views and enables view data modifications.

Partitioned views

Before I dive into the Entity Framework magic, I’ll quickly outline the steps required to create partitioned views in SQL Server.

First, a “key” identifying  records of a given data range must be determined.  If data is primarily queried by year, a year column would be used for the data range key.

Once a logical data range key is determined, it’s time to create member tables. Three years’ worth of data would result in three separate member tables. In order to make the SQL Server query optimizer aware of the data range key, a check constraint is added on the column. The create table scripts look as follows:


CREATE TABLE [dbo].[PokerHand2011](
 [Id] [bigint] IDENTITY(1,1) NOT NULL,
 [Year] [int] NOT NULL CHECK(Year=2011), 
 [Action] [nvarchar](max) NULL,
 [Amount] [decimal](18, 2) NOT NULL,
 [PlayerName] [nvarchar](max) NULL,
 [PokerSiteHandId] [nvarchar](max) NULL, 
 CONSTRAINT PK_PokerHand2011 PRIMARY KEY ( [Id], [Year])
)

GO

CREATE TABLE [dbo].[PokerHand2012](
 [Id] [bigint] IDENTITY(1,1) NOT NULL,
 [Year] [int] NOT NULL CHECK(Year=2012), 
 [Action] [nvarchar](max) NULL,
 [Amount] [decimal](18, 2) NOT NULL,
 [PlayerName] [nvarchar](max) NULL,
 [PokerSiteHandId] [nvarchar](max) NULL, 
 CONSTRAINT PK_PokerHand2012 PRIMARY KEY ( [Id], [Year])
)

GO

CREATE TABLE [dbo].[PokerHand2013](
 [Id] [bigint] IDENTITY(1,1) NOT NULL,
 [Year] [int] NOT NULL CHECK(Year=2013), 
 [Action] [nvarchar](max) NULL,
 [Amount] [decimal](18, 2) NOT NULL,
 [PlayerName] [nvarchar](max) NULL,
 [PokerSiteHandId] [nvarchar](max) NULL, 
 CONSTRAINT PK_PokerHand2013 PRIMARY KEY ( [Id], [Year])
)

Once all the member tables are created, the partitioned view is created as a union of all member tables.


CREATE VIEW PokerHand
AS
select * from PokerHand2011 union all
select * from PokerHand2012 union all
select * from PokerHand2013
GO

Pretty simple stuff. With the partitioned view created, data can now easily be queried over the entire data set or over a specific range, all from the same view. The query execution plans below highlight this behavior.

[Execution plan: full data set]

[Execution plan: specific range]

Code-first entity framework

OK, now that the SQL groundwork is out of the way, it’s time to write some .NET code.

In this section I’m going to step through the creation of a self-contained library that creates partitioned views based on plain old C# objects (POCOs) and exposes the partitioned view through Entity Framework with read, insert, update and delete capabilities.

Configuration of the partitioned view

In order to create a partitioned view for a POCO, the library will need to know which properties of the C# object compose the primary key, and which properties compose the data range key. I use the following configuration class for this:


public class PartitionedViewConfiguration<T>
{
    public Expression<Func<T, Object>> PrimaryKeyExpression { get; set; }
    public Expression<Func<T, Object>> DataRangeKeyExpression { get; set; }
}

Which can be used as follows:


var config = new PartitionedViewConfiguration<PokerHand>
{
    DataRangeKeyExpression = ph => new { ph.Year, ph.Month },
    PrimaryKeyExpression = ph => new { ph.Id, ph.Year, ph.Month }
};
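For reference, a PokerHand POCO consistent with the tables above could look something like this (the Month property is assumed from the configuration example; the SQL scripts earlier partition by Year alone):

public class PokerHand
{
    public long Id { get; set; }
    public int Year { get; set; }
    public int Month { get; set; }   // assumed from the configuration example above
    public string Action { get; set; }
    public decimal Amount { get; set; }
    public string PlayerName { get; set; }
    public string PokerSiteHandId { get; set; }
}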

Creating data member tables

Creating tables using code-first Entity Framework is very easy. You register a type with a DbContext, and once the DbContext is initialized for the first time, Entity Framework creates a table corresponding to that type. For the purpose of this library, I don’t have types for the member tables I want to create. I can, however, create them dynamically as needed by inheriting from the partitioned view type!


private Type CreatePartitionTableType(Type partitionedViewType, string suffix)
{
    var asm = new AssemblyName(String.Concat(partitionedViewType.Name, "PartitionedViewMemberTables"));
    var asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(asm, AssemblyBuilderAccess.Run);
    var moduleBuilder = asmBuilder.DefineDynamicModule("MemberTables");
    var typeName = String.Concat(partitionedViewType.Name, suffix);
    var typeBuilder = moduleBuilder.DefineType(typeName);
    typeBuilder.SetParent(partitionedViewType);
    return typeBuilder.CreateType();
}
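For example, using the PokerHand type from the SQL section:

// Produces a runtime type named "PokerHand2012" that derives from PokerHand.
var memberTableType = CreatePartitionTableType(typeof(PokerHand), "2012");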

I can now register the newly created type with a DbContext as follows.


public class MemberTableDbContext<T> : DbContext where T : class
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.RegisterEntityType(DataType);
        modelBuilder.Types().Configure(c => c.HasKey(PrimaryKeyPropertyNames));
        base.OnModelCreating(modelBuilder);
    }

    public Type DataType
    {
        get { return typeof(T); }
    }

    //...
}

In the code above, “T” is the type that was created dynamically. I create one DbContext instance per member table type for a reason. Notice the DbContext configures the primary key, but it does not create the check constraint required for the data range key. There isn’t a great way of creating these constraints using Entity Framework, so I need to create them using TSQL once the member table is available. But when is a table created using Entity Framework? By default, code-first Entity Framework calls the DbContext initializer once per app domain. Creating one DbContext per member table is what allows me to know when member tables are created. To put it all together, I create my member table data type dynamically, instantiate the DbContext, and create the check constraints as follows:


public virtual void AddConstraintCheckIfEqual(string tableName, string columnName, object value)
{
    var cmd = String.Format("alter table {0} add check({1}={2})",
        tableName, columnName, SqlSafe(value));
    emptyContext.Database.ExecuteSqlCommand(cmd);
}
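To make the sequence concrete, here is a rough sketch of how the initialization step might be wired together. The library’s actual InitializeMemberTables implementation isn’t shown in this article, so the helper and holder names below (DataRangeKeyColumnName, MemberTable, memberTables) are assumptions used for illustration, and the sketch is simplified to a single-column data range key:

// Rough sketch only; helper/holder names are assumptions, not the library's real code.
private void InitializeMemberTables()
{
    foreach (var dataRangeKey in GetDataRangeKeys())
    {
        // e.g. "PokerHand" + "2012" -> dynamic type PokerHand2012
        var tableType = CreatePartitionTableType(typeof(T), dataRangeKey.ToString());

        // One DbContext per member table; initializing it creates the table.
        var contextType = typeof(MemberTableDbContext<>).MakeGenericType(tableType);
        var context = (DbContext)Activator.CreateInstance(contextType);
        context.Database.Initialize(force: false);

        // Entity Framework won't emit the check constraint, so add it with TSQL.
        DatabaseAdapter.AddConstraintCheckIfEqual(tableType.Name, DataRangeKeyColumnName, dataRangeKey);

        memberTables.Add(new MemberTable { DataType = tableType, DbContext = context, DataRangeKey = dataRangeKey });
    }
}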

Creating partitioned views

The partitioned view can easily be exposed to Entity Framework by registering the partitioned view type with a DbContext. However, Entity Framework cannot create views, so the library will need to create the view upon initialization of the DbContext, once all member tables for the view have been created. The code looks as follows:

public class PartitionedViewAdapter<T> : DbContext where T : class
{
    public IDbSet<T> View { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        InitializeMemberTables();
        CreateView();
        modelBuilder.Entity<T>().ToTable(ViewName).HasKey(Config.PrimaryKeyExpression);
    }

    private void CreateView()
    {
        var keys = GetDataRangeKeys();
        var memberTableNames = keys.Select(dataRangeKey => PartitionTablePrefix + dataRangeKey);
        DatabaseAdapter.CreateOrAlterPartitionedView(ViewName, memberTableNames);
    }

    //...
}

DatabaseAdapter code


public virtual void CreateOrAlterPartitionedView(string viewName, IEnumerable<string> memberTableNames)
{
    var createOrAlter = ObjectExists(viewName) ? "alter" : "create";
    var selects = String.Join(" union all ",
        memberTableNames.Select(tableName => "select * from " + tableName));
    emptyContext.Database.ExecuteSqlCommand(createOrAlter + " view " + viewName + " as " + selects);
}
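Putting the pieces together, consuming the finished adapter could look something like the sketch below. The constructor arguments are an assumption (the article doesn’t show how the configuration or connection string are passed in), and the data range key here is Year alone, matching the SQL tables from the first section:

// Usage sketch; the adapter's constructor signature is assumed.
var config = new PartitionedViewConfiguration<PokerHand>
{
    PrimaryKeyExpression = ph => new { ph.Id, ph.Year },
    DataRangeKeyExpression = ph => new { ph.Year }
};

using (var db = new PartitionedViewAdapter<PokerHand>(config))
{
    // Range query: the check constraints let SQL Server skip the other member tables.
    var hands2012 = db.View.Where(ph => ph.Year == 2012).ToList();

    // Insert: the SaveChanges override (next section) routes the row to PokerHand2013.
    db.View.Add(new PokerHand { Year = 2013, Action = "raise", Amount = 25.00m });
    db.SaveChanges();
}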

Modifying view data

The last problem to solve with this library is supporting inserts, updates and deletes on the partitioned view. One approach is to configure Entity Framework to use stored procedures for entity modifications. This is accomplished by using the MapToStoredProcedures method of the EntityTypeConfiguration object. The stored procedures can use case statements to defer the operation to the appropriate member table. Another option is to override SaveChanges on the partitioned view DbContext and delegate all entity modifications to the member table DbContext instances. The code looks as follows:


public override int SaveChanges()
{
    var objectsWritten = 0;
    foreach (var o in ChangeTracker.Entries<T>().Where(e => e.State != EntityState.Unchanged))
    {
        var dataRangeKey = GetDataRangeKey(o.Entity);
        var memberTable = memberTables.Single(mt => mt.DataRangeKey == dataRangeKey);
        var copy = CloneTo(o.Entity, memberTable.DataType);
        memberTable.DbContext.Entry(copy).State = o.State;
        objectsWritten += memberTable.DbContext.SaveChanges();
        Copy(memberTable.DbContext.Entry(copy).Entity as T, o.Entity);

        if (o.State == EntityState.Deleted)
            o.State = EntityState.Detached;
        else
            o.State = EntityState.Unchanged;
    }
    return objectsWritten;
}

The CloneTo method creates an instance of the member table data type and copies the values from the partitioned view entity to the member table entity. The Copy method copies the values from the member table entity back to the partitioned view entity. The other thing to notice is that I change the EntityState of the modified objects so that operations are not repeated on subsequent calls to SaveChanges.
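CloneTo and Copy aren’t shown here (and, as the disclaimer below mentions, they are academic); a minimal reflection-based sketch of what they might do:

// Hypothetical sketch of the value-copying helpers; a real project would likely
// use a mapping library instead (see the disclaimer below).
private static object CloneTo(object source, Type targetType)
{
    var target = Activator.CreateInstance(targetType);
    Copy(source, target);
    return target;
}

private static void Copy(object source, object target)
{
    // Member table types inherit from the view type T, so the view's public
    // properties exist on both source and target.
    foreach (var property in typeof(T).GetProperties().Where(p => p.CanRead && p.CanWrite))
    {
        property.SetValue(target, property.GetValue(source));
    }
}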

Parting notes

This wraps up my article on partitioned views and Entity Framework. Working with partitioned views traditionally adds significant overhead to development efforts. As shown in this article, code-first Entity Framework can be of great help in managing the creation of partitioned view objects. All that is needed to really make this library complete is a BulkInsert, an exercise for the future.

The code used for this article can be found on GitHub.

Disclaimer: some of the methods (CloneTo, Copy, CreatePartitionTableType) are there for academic reasons. I would replace them with more robust third-party alternatives in a commercial setting.

Debunking 2 major myths about dependency injection containers

There have been lively conversations over the use and merits of dependency injection (DI) containers over the years. I’m a pragmatic software developer; I try to do what works best for any given situation. There are instances where I believe DI containers are of value and there are instances where I believe they are misused. Here are two major misconceptions leading to what I believe is an improper use of DI containers.

  • DI containers improve testability
  • DI containers reduce coupling

DI containers improve testability

This is a very common reason given when people advocate the use of DI containers. The theme here is that by using a DI container one can easily swap out dependencies for mocks and stubs. While this is true, it’s equally true that this can be done without using a DI container.

Let’s take a look at a typical DI container scenario.  The following code will use the Unity DI container and the Moq framework.

public interface IOrderRepository
{
    decimal GetOrderAmount(int orderId);
}
public class OrderRepository : IOrderRepository
{
    public decimal GetOrderAmount(int orderId)
    {
        throw new NotImplementedException();
    }
}
public class CustomerService
{
    public IOrderRepository OrderRepository { get; set; }
    public CustomerService(IOrderRepository repository)
    {
        OrderRepository = repository;
    }
    public decimal GetOrderAmountDue(int orderId)
    {
        var orderAmount = OrderRepository.GetOrderAmount(orderId);
        return ApplyTaxes(orderAmount);
    }
    private decimal ApplyTaxes(decimal amount)
    {
        //...
    }
}

[TestMethod]
public void CanGetAfterTaxesAmountDue()
{
    UnityContainer container = new UnityContainer();
    var mock = new Mock<IOrderRepository>();
    mock.Setup(m => m.GetOrderAmount(1)).Returns(20);
    container.RegisterInstance<IOrderRepository>(mock.Object);
    var service = container.Resolve<CustomerService>();
    var amountDue = service.GetOrderAmountDue(1);
    Assert.AreEqual(22.5m, amountDue);
}

And now a similar sample without using a DI container

public class CustomerRepository
{
    public virtual decimal GetBalance(int custId)
    {
        throw new NotImplementedException();
    }
}
public class InsuranceService
{
    public CustomerRepository CustomerRepository { get; set; }
    public InsuranceService(CustomerRepository repository)
    {
        CustomerRepository = repository;
    }
    public decimal GetCoverageAmount(int custId)
    {
        var balance = CustomerRepository.GetBalance(custId);

        return CalculateCoverageAmount(balance);
    }
    private decimal CalculateCoverageAmount(decimal amount)
    {
        //...
    }
}
[TestMethod]
public void CanGetCoverageAmount()
{
    var mock = new Mock<CustomerRepository>();
    mock.CallBase = false;
    mock.Setup(m => m.GetBalance(1)).Returns(20);
    var service = new InsuranceService(mock.Object);
    var coverageAmount = service.GetCoverageAmount(1);
    Assert.AreEqual(12.5m, coverageAmount);
}

These samples are very simple; however, the same technique can be applied to more complex scenarios as well. The basic principle behind this technique is simple:

  • Mock classes using polymorphism
  • Inject the mocked objects into the class being tested

I used a mock framework in these samples, but it’s not required. I could have created a mock class that implements IOrderRepository in scenario 1, and for scenario 2 I could have created a mock class that extends CustomerRepository.
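For example, a hand-rolled test double for scenario 1 could be as simple as the following (keeping the same 12.5% tax assumption implied by the earlier test):

// Hand-written stub implementing IOrderRepository; no mock framework or container needed.
public class StubOrderRepository : IOrderRepository
{
    public decimal AmountToReturn { get; set; }

    public decimal GetOrderAmount(int orderId)
    {
        return AmountToReturn;
    }
}

[TestMethod]
public void CanGetAfterTaxesAmountDueWithStub()
{
    var service = new CustomerService(new StubOrderRepository { AmountToReturn = 20 });
    Assert.AreEqual(22.5m, service.GetOrderAmountDue(1));
}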

DI containers are sometimes used for their service locator capabilities. A fairly common scenario occurs when putting legacy code under test. Take the following class.

public class CustomerManager
{
 ...
 public IEnumerable<int> GetOrderNumbers()
 {
  return new OrderRepository().GetOrders().Select(o=>o.Id);
 }
}

If the task is just to put CustomerManager under test and changing any outside code isn’t an option at the moment, dependencies can easily be injected with a DI container by doing the following.

public class CustomerManager
{
 ...
 public UnityContainer UnityContainer { get; set; }
 public IEnumerable<int> GetOrderNumbers()
 {
  return UnityContainer.Resolve<OrderRepository>().GetOrders...
 }
}

This can also be accomplished without a DI container using any of the creational patterns.  It can be as simple as the following.

public class CustomerManager
{
 ...
 public Func<OrderRepository> OrderRepositoryProvider = () => new OrderRepository();
 public IEnumerable<int> GetOrderNumbers()
 {
  return OrderRepositoryProvider().GetOrders().Select(o => o.Id);
 }
}

The unit test just needs to change the “provider” and the task is complete.
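A test for that sketch might look something like the following. The Order class and a virtual GetOrders method on OrderRepository are implied by the snippets above but never shown, so the stub below is an assumption:

// Hypothetical stub; assumes OrderRepository.GetOrders() is virtual and returns Orders with an Id.
public class FakeOrderRepository : OrderRepository
{
    public override IEnumerable<Order> GetOrders()
    {
        return new[] { new Order { Id = 7 } };
    }
}

[TestMethod]
public void CanGetOrderNumbers()
{
    var manager = new CustomerManager();
    manager.OrderRepositoryProvider = () => new FakeOrderRepository();

    Assert.AreEqual(7, manager.GetOrderNumbers().Single());
}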

I could go through a number of different scenarios, but from my experience I can confidently claim that DI containers do not improve testability.

DI containers reduce coupling

It is said that DI frameworks reduce coupling by removing dependencies on concrete implementations.

We know this isn’t true as a rule, since DI frameworks can resolve concrete classes. I will also state that there are cases where a DI container can reduce coupling. For example, when you add a new dependency to a class constructor, the calling code will not need to change in order to add this new dependency. There are drawbacks to this as well, but those problems are not really the issue I have with the claim that DI containers reduce coupling. My issue with the claim is twofold:

  • Developers create abstractions where they are not needed
  • Having a class depend on an interface does not mean coupling has been reduced in the application.

The first point is a result of developers who misinterpret what an abstraction is. An abstraction is not necessarily an abstract class or an interface. Developers end up creating abstractions for things that are not actually abstract or have already been abstracted. The second point is that developers sometimes remove a dependency from a library that is only ever used in one application. So while the library no longer has the dependency, the application still does, and removing the dependency from the library has accomplished nothing. Let me use a few examples to clarify.

public class ChargeService
{
 ...
 public int GetChargeId(string chargeToken)
 {
  return (int)CreateSelectChargeIdCommand(chargeToken)
   .ExecuteScalar();
 }
 public SqlCommand CreateSelectChargeIdCommand(string chargeToken)
 { ... }
}

The ChargeService has a dependency on SQL Server. If we want to switch database engines, the ChargeService class will need to change. To avoid this we create an abstraction. We accomplish this by doing the following.

public class ChargeRepository
{
 public int GetChargeId(string chargeToken)
 {
  return (int)CreateSelectChargeIdCommand(chargeToken)
    .ExecuteScalar();
 }
 public SqlCommand CreateSelectChargeIdCommand(string chargeToken)
 { ... }
}
public class ChargeService
{
 ...
 public ChargeService(ChargeRepository repository)
 {
  ChargeRepository = repository;
 }
 public void ChargeCustomer(string chargetoken)
 {
  var chargeId = ChargeRepository.GetChargeId(chargetoken);
 }
}

Now that I have abstracted the calls to the database engine, if I need to change database engines the ChargeService class does not need to change; I only need to change the repository class. I didn’t need an interface to accomplish this.

There are many cases where an interface should be used for these types of abstractions. I’m thinking of cases like writing a library that will be used in applications outside the control of the person writing the library. In many other cases, however, be it misconceptions or perhaps a slight case of over-engineering, unnecessary complexity and development time have been added to projects.
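For that library-boundary case, the interface earns its keep; a quick sketch (names hypothetical):

// The library ships only the abstraction; each consuming application plugs in
// its own implementation (SQL Server, web service, in-memory, ...).
public interface IChargeRepository
{
    int GetChargeId(string chargeToken);
}

public class ChargeService
{
    private readonly IChargeRepository repository;

    public ChargeService(IChargeRepository repository)
    {
        this.repository = repository;
    }

    public void ChargeCustomer(string chargeToken)
    {
        var chargeId = repository.GetChargeId(chargeToken);
        // ...
    }
}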

That concludes my contribution to the dependency injection discussion.  I hope to contribute more in the future by sharing scenarios where I advocate the use of dependency injection containers.  In the meantime feel free to leave a comment.  I look forward to exchanging ideas.

 

Toastr code review

Toastr is a small notifications library written in JavaScript/jQuery. It’s a nice utility web developers can use to alert users to certain events, tasks, progress, errors, etc…

In this review I’m going to talk about the single responsibility and open-closed principles. I’ll also pitch the idea that in object-oriented programming you don’t need “if” statements.

Before we dive into this, let me put the brakes on for a second.

At ~400 lines of code this library is very tiny. Its core function is to show a pretty notification. It’s going to be difficult not to come across as pedantic in reviewing a code library this small. Code reviews are often thought of as an exercise in identifying potential problems with a given code base. But code reviews can be much more than this; they are a great way of learning and sharing ideas, even if the code is very small and trivial.

The source code used for this review can be found at the following url: https://github.com/CodeSeven/toastr/blob/master/toastr.js

Let’s first look at how this library is initialized.

; (function (define) {
    define(['jquery'], function ($) {
        return (function (){
        ....
        })();
    });
}(typeof define === 'function' && define.amd ? define : function (deps, factory) {
    if (typeof module !== 'undefined' && module.exports) { //Node
        module.exports = factory(require('jquery'));
    } else {
        window['toastr'] = factory(window['jQuery']);
    }
}));

So there are a few things in here that are interesting. That leading semi-colon isn’t a typo. It’s there so that when JavaScript files are minified and concatenated together you get two statements instead of one. The other thing to notice is that the authors play a bit of code golf to support three load scenarios:

1) AMD (dojo,backbone,etc.)

require(['toastr'], function (toastr) {
    ...
});

2) Node.js

var toastr = require('toastr');

3) Browser + jquery

<script src="jquery.js"></script>
<script src="toastr.js"></script>

Another thing that stood out to me is that the authors use an extra closure that I wouldn’t have.

return (function () { ... })()

They do this because of their writing style: they call their functions before they write them. ¯\_(ツ)_/¯

Shrugs aside, I do like the start of the library; it’s very clear which functions are exposed by toastr.

var toastr = {
                clear: clear,
                remove: remove,
                error: error,
                getContainer: getContainer,
                info: info,
                options: {},
                subscribe: subscribe,
                success: success,
                version: '2.1.1',
                warning: warning
            };
 
            var previousToast;
 
            return toastr;

OK, let’s start looking at the guts of this thing. We’ll start with the notify function.

function notify(map) {
    var options = getOptions();
    var iconClass = map.iconClass || options.iconClass;
 
    if (typeof (map.optionsOverride) !== 'undefined') {
        options = $.extend(options, map.optionsOverride);
        iconClass = map.optionsOverride.iconClass || iconClass;
    }
 
    if (shouldExit(options, map)) { return; }

Aside from the struggle in setting an iconClass, everything seems pretty innocent here. I do have a bit of an issue with shouldExit. The notify function should have one responsibility: show a notification. Let’s take a peek at the shouldExit function.

function shouldExit(options, map) {
    if (options.preventDuplicates) {
        if (map.message === previousToast) {
            return true;
        } else {
            previousToast = map.message;
        }
    }
    return false;
}

I’m not crazy about this. I don’t like that we’re setting state in this method. We also learn that preventDuplicates doesn’t prevent duplicates in all cases. We would get three toasts from the following statements: “toastr.info(1); toastr.info(2); toastr.info(1)”.

I would have preferred to see the existing code written something like the following:

function request(map){
    var options = getOptions();
    if(shouldNotify(options,map,activeToasts)){
        notify(options,map);
        activeToasts.push(map.message);
    }
}

OK, so preventDuplicates is a little more robust; there’s value in that. The shouldExit function was moved outside of the notify function because, as I said, notify should have one responsibility. But is this one-responsibility thing important? It’s certainly a concept espoused by many through the single responsibility principle. Let’s look at a few more functions.

function setCloseButton() {
    if (options.closeButton) {
        $closeElement.addClass('toast-close-button').attr('role', 'button');
        $toastElement.prepend($closeElement);
    }
}
function handleEvents() {
    $toastElement.hover(stickAround, delayedHideToast);
    if (!options.onclick && options.tapToDismiss) {
        $toastElement.click(hideToast);
    }
 
    if (options.closeButton && $closeElement) {
        $closeElement.click(function (event) {
            ...
        });
    }
    ...
}
function displayToast() {
    ...
    if (options.timeOut > 0) {
        intervalId = setTimeout(hideToast, options.timeOut);
        ...
        if (options.progressBar) {
            progressBar.intervalId = ...
        }
    }
}

If options are added or removed, several areas of the code need to be modified.  Everything would need to be retested. If several developers were working on the code base, they would be tripping over each other fighting all kinds of code merge issues.

So how can this toastr library be improved? Notice that all the options add or enable functionality on this thing called a toast. We could say the options are decorating the toast. Hey, there’s a pattern for that!

Using the decorator pattern every toast option could be used to decorate toasts with new functionality.  In doing so, the code would be aligned with the open-closed principle.  The core of the library is closed to modifications, but open for extension.  The notify function shouldn’t have to be modified every time a new option is added, but it should be extensible so that new functionality can be added.

I’m going to walk through how a decorator pattern could be implemented for this library. But first I want to talk technique. Glance over the code and you’ll see “if” statements plastered all over the place. It’s been said that in object-oriented programming no “if” statements are needed. This might sound a little crazy, and you could point to a scenario like the following and say you definitely need an “if” statement for that code.

if($('#' + options.containerId).length)
    return getContainer();
else
    return createContainer();

In a language like C# we could easily write this without an “if” statement.

public class ContainerState { };
public class CreatedState:ContainerState { };
public class NoContainerState : ContainerState { };
public Container GetContainer(CreatedState state) { ... };
public Container GetContainer(NoContainerState state){ ... }

This technique is useful in keeping functions to a single responsibility.

OK, back to the decorator. I’m going to use this technique in the decorator implementation: as I run the decorator, I’m going to override functions for the given state of the options. Look at the setSequence function.

function setSequence() {
    if (options.newestOnTop) {
        $container.prepend($toastElement);
    } else {
        $container.append($toastElement);
    }
}

In my decorator, when the newestOnTop option is set, I will override setSequence to be

function(){$container.prepend($toastElement);}

When the option isn’t set, setSequence will be this function

function(){ $container.append($toastElement); }

The next step in creating my decorator involves creating a “decorate” function for every toast option. To do this I’m going to go through the code library, and any place I find functionality specific to an option, I will add that code to a “decorate” function specific to that option. These “decorate” functions will end up doing various things depending on the option; they’ll include adding html elements to the toast, overloading library functions, adding event listeners for actions like “show toast”, “hide toast”, etc…

Once completed these functions should look something like this.

title: function (value, toast) {
    var titleElement = $('<div/>');
    titleElement.append(value).addClass(toast.options.titleClass);
    toast.element.append(titleElement);
},
message: function (value, toast) {
    var messageElement = $('<div/>');
    messageElement.append(toast.options.message)
        .addClass(toast.options.messageClass);
    toast.element.append(messageElement);
},
iconClass: function (value, toast) {
    toast.element.addClass(toast.options.toastClass)
        .addClass(toast.options.iconClass);
},
timeOut: function (value, toast) {
    toast.element.on('toastDisplayed', timeoutToastDisplayed);
    toast.element.on('stickAround', function () {
        clearTimeout(toast.intervalId);                          
    });
    toast.element.on('delayedHideToast', timeoutDelayedHideToast)                
},

With the list of functions created, once a toast is requested, I can iterate through all the options for that toast and call the “decorate” methods that apply for the given options.

I could also make use of a simple “Decorator” class in javascript

function Decorator(action) {
    this.action = action || function () { };
    this.decorate = function (fn) {
        var t = this.action;
        this.action = function () { t(); fn() }
    }
}

With the use of that class I can now replace the existing personalizeToast function

function personalizeToast() {
    setIcon();
    setTitle();
    setMessage();
    setCloseButton();
    setProgressBar();
    setSequence();
}

With the call below

personalizations.action();

The toastr website states: “The goal is to create a simple core library that can be customized and extended.” By implementing this type of decorator in the toastr library, it could very easily be adapted to a plugin architecture. I can imagine several extensions for things like media, video, effects, forms, etc…

That’s it for my review of the toastr library; I hope it was informative. I personally learned a few things by looking at this library and felt I could have written a lot more on it. Feel free to contact me with any questions or comments.

Are you doing software development the right way?

Over the fifteen plus years of my career as a software development consultant I’ve seen it all.  I’ve worked in the private sector, public sector, global delivery teams, start-ups and conglomerates.

I could write megabytes of text on the different tools, frameworks and methodologies I’ve seen. Object-oriented programming? Rapid application development? Rational Unified Process? Code generation, Swing, CSLA, Scrum, Kanban… don’t tease me, I’ll fill your hard drives!

Throughout all these environments, methodologies and tools you might wonder if one model is better than another.  Is there a right way of doing software development?

It does feel like this question is always present. Here’s my insight: successful software development starts with knowledgeable people. Knowledge is not a product; it’s not something you buy, and you don’t run to the store once you run out of it. Knowledge must be continually developed.

How much emphasis does your organization put into personal development? Have you ever heard these words, uttered in a low voice: “We don’t have a lot of money for training this year”? A minimal budget doesn’t mean you have to limit training to online learning site subscriptions.

Here are a few small cost-efficient tips that could make a big difference in your organization:

  • Engage the people you have. Chances are there are people in your organization with valuable knowledge they can share.
  • Promote “show and tells”, team blogs, demonstration applications. Small rewards such as a free lunch or leaving early on a Friday can be a very effective motivator.
  • Write more unit tests! The ability to write code that is easily testable is a very valuable skill to learn and develop. Furthermore, writing unit tests is a great way to gain exposure to new technologies and systems without necessarily being on the critical path of the development or maintenance effort. I’ve had a tremendous amount of success bringing new team members along on a project by getting them involved in the creation of unit tests.
  • Invite a developer from outside the organization to speak about a technology or methodology your team is interested in. Interested in mocking frameworks or object-relational mapping tools such as Entity Framework? Wish you knew more about those nifty LINQ statements? Wonder what a day on a Scrum team looks like? Find someone in your network with this type of experience and tap into their knowledge for a few hours. Learning about a technique is great; learning how that technique can be applied in a real-world scenario is even better.
  • Perform code reviews regularly. Code reviews are a great way to learn about different techniques and patterns. Afraid code reviews might cause friction between team members? Try reviewing code from an open source project instead.

Is there a right way of doing software development? If this question is being asked in your organization, change the conversation.  Focus on gaining knowledge as a whole instead of looking for one prescribed recipe.  You will find that much of the literature on software development is really a description of what knowledgeable teams do instinctively.