Posts Tagged With TipsAndTricks - Musing, Rants & Jumbled Thoughts

Header Photo Credit: Lorenzo Cafaro (Creative Commons Zero License)

I've been a big fan of Rhino.Mocks for many years, but the original maintainer (Oren Eini, aka Ayende) announced in 2013 that he was no longer maintaining the project and handed it over to a new maintainer (Mike Meisinger). Mike, however, has not been actively maintaining it either. The project is effectively dead and very unlikely to support future versions of .NET, including .NET Core.

A while back I wrote a Quick Guide to Using Rhino.Mocks, and in last week's post, I gave an overview of Moq for Rhino.Mocks Users. While Moq seems to be the current front runner as a Rhino.Mocks replacement, there's another player in town with NSubstitute. So in the same vein as my Moq post, I'm providing a guide to help people make the transition (or at least help with a review) from Rhino.Mocks to NSubstitute.

The format of this post will follow that of the others to help with cross-referencing between them.

Overall, I'm not a big fan of NSubstitute's syntax, as it looks too similar to just calls to the methods themselves, which:

  • makes it harder to understand, as the reader must know which calls are setting up stubs and which are real calls.
  • makes it much more likely you'll accidentally do the wrong thing and call a real method.

Now, if you're only mocking interfaces, then this wouldn't be an issue -- but rarely do I find a codebase that allows everything to be an interface.
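For example, with a partial substitute of a hypothetical Calculator class (with a virtual Add method), the setup line and a real call are visually identical -- and the setup line even invokes the real method:

    var calc = Substitute.ForPartsOf<Calculator>();  // Calculator is a hypothetical class with a virtual Add()
    calc.Add(1, 2).Returns(5);  // stub setup -- but this line also invokes the real Add(1, 2)
    var sum = calc.Add(1, 2);   // a real call, visually identical to the setup line above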

Generating Different Mock Types

By default, NSubstitute's Substitute.For<T> method generates something close to a Dynamic mock -- that is, if you don't explicitly provide a return value, it will return the default value for the data type of the return value if it's a value type (0 for numbers, false for bools, empty string for strings), but for object types, it's a bit more complicated. From their documentation:

...any properties or methods that return an interface, delegate, or purely virtual class* will automatically return substitutes themselves.

A pure virtual class is defined as one with all its public methods and properties defined as virtual or abstract and with a default, parameterless constructor defined as public or protected.

So it won't return null in those cases. Instead, it creates a new mock of the return type. Otherwise, it will return null.

Be careful, though -- if you are using a real class (not an interface), it will call the underlying methods if they aren't virtual.

That said, there is the concept of a partial mock using .ForPartsOf<T> -- that is, a mock that can use the underlying object's implementation. However, to prevent calling the real methods for the methods you do want to mock, you must use the syntax described in the Advanced Argument Constraints section below.

Dynamic-ish Mock:
    Rhino.Mocks: IFoo mock = MockRepository.GenerateMock<IFoo>();
    NSubstitute: IFoo substitute = Substitute.For<IFoo>();
Partial Mock:
    Rhino.Mocks: IFoo mock = MockRepository.GeneratePartialMock<IFoo>();
    NSubstitute: IFoo substitute = Substitute.ForPartsOf<IFoo>();

There isn't an option to create a Strict mock in NSubstitute, and it looks like there won't ever be one.

Passing Constructor Arguments

If you need to pass arguments to the constructor of the class you're trying to mock, there are overloads to allow you to do that.

Rhino.Mocks: SomeClass mock = MockRepository.GenerateMock<SomeClass>(param1, param2);
NSubstitute: SomeClass substitute = Substitute.ForPartsOf<SomeClass>(param1, param2);

Stubs vs Mocks (or not)

The syntax for NSubstitute is a little different than the others. In most cases, there's not an explicit method you call to create the mock/stub -- rather, you basically call the method you want to create a substitute for. Also, there's no distinction between a mock and a stub -- they're all just substitutions. But since they don't create expectations, I would classify them as stubs.

You use the .Returns() method to set up a stub. Note that you can provide multiple values to .Returns(), which will set up a chain of calls, like this: .Returns(valueForFirstCall, valueForSecondCall, valueForThirdCall). This works for both methods and properties.
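For example, using a hypothetical ICounter interface, chained return values are consumed one call at a time:

    var counter = Substitute.For<ICounter>();  // ICounter is a hypothetical interface
    counter.GetNext().Returns(1, 2, 3);
    counter.GetNext();   // returns 1
    counter.GetNext();   // returns 2
    counter.GetNext();   // returns 3 (and so does every call after this)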

For Methods:

Stub:
    Rhino.Mocks: mock.Stub(x => x.SomeMethod()).Return(true);
    NSubstitute: substitute.SomeMethod().Returns(true);
Mock:
    Rhino.Mocks: mock.Expect(x => x.SomeMethod()).Return(true);
    NSubstitute: (not supported)

For Properties:

Stub:
    Rhino.Mocks: mock.Stub(x => x.SomeProperty).Return(true);
    NSubstitute: substitute.SomeProperty.Returns(true);
                 or substitute.SomeProperty = true;
Mock:
    Rhino.Mocks: mock.Expect(x => x.SomeProperty).Return(true);
    NSubstitute: (not supported)

Unlike Rhino.Mocks, however, if some other code sets the property value to something else, NSubstitute's stub will return the new value, not the value you stipulated in your stub. In other words, NSubstitute's properties act like regular properties, and your stub just sets an initial value.
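A quick sketch of that behavior, using the SomeProperty stub from the table above:

    substitute.SomeProperty.Returns(true);  // stub sets the initial value
    var first = substitute.SomeProperty;    // true -- the stubbed value
    substitute.SomeProperty = false;        // some other code changes the property
    var second = substitute.SomeProperty;   // false -- the new value wins, not the stub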

Verifying expectations

Since you're not creating expectations with NSubstitute, there are no mass validation options. Instead, you need to check each stub (which, being more explicit, is probably the better route anyway).

For Methods:

Called:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeMethod());
    NSubstitute: substitute.Received().SomeMethod();
Called a specific number of times:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeMethod(), options => options.Repeat.Times(2));
    NSubstitute: substitute.Received(2).SomeMethod();
Not called:
    Rhino.Mocks: mock.AssertWasNotCalled(x => x.SomeMethod());
    NSubstitute: substitute.DidNotReceive().SomeMethod();
                 or substitute.DidNotReceiveWithAnyArgs().SomeMethod();

For Properties:

Get:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeProperty);
    NSubstitute: var temp = substitute.Received().SomeProperty;
Set:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeProperty = true);
    NSubstitute: substitute.Received().SomeProperty = true;
Not called:
    Rhino.Mocks: mock.AssertWasNotCalled(x => x.SomeProperty = true);
    NSubstitute: substitute.DidNotReceive().SomeProperty = true;
                 or substitute.DidNotReceiveWithAnyArgs().SomeProperty = true;

Note that for the getter, you must set a variable to the return value to prevent a compiler error.

The WithAnyArgs versions will ignore whatever parameters (for methods) or set values (for properties) you use in your check and will verify against any inputs.
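For example, this check from the table above fails if the property setter was called with any value at all, not just true:

    substitute.DidNotReceiveWithAnyArgs().SomeProperty = true;  // the 'true' here is ignored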

Advanced Argument Constraints

Both frameworks provide ways to put advanced constraints on the arguments that trigger a mock. Below are examples -- you'll need to consult the framework documentation for the full list of available constraints.

Rhino.Mocks

 mock.Stub(dao => dao.GetRecordFromDatabase(
                         Arg<int>.Is.GreaterThanOrEqual(0),
                         Arg<decimal>.Is.NotEqual(2.0),
                         Arg<List<string>>.List.ContainsAll(new List<string> { "foo", "bar" }),
                         Arg<object>.Is.NotNull,
                         Arg<object>.Is.Anything))
                 .Return(recordFromDatabase);

NSubstitute

  substitute.GetRecordFromDatabase(
                         Arg.Is<int>(i => i >= 0),
                         Arg.Is<decimal>(d => d != 2.0m),
                         Arg.Do<int>(x => capturedValue = x),
                         Arg.Any<object>())
         .Returns(recordFromDatabase);

Or, to ignore the arguments altogether, use .ReturnsForAnyArgs(). This is similar to Rhino.Mocks' .IgnoreArguments() method.
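A quick sketch, reusing the GetRecordFromDatabase method from the examples above -- the argument values passed in the setup call are ignored entirely:

    substitute.GetRecordFromDatabase(0, 0m, null, null, null)  // placeholder arguments
              .ReturnsForAnyArgs(recordFromDatabase);          // matches any combination of inputs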

Providing a Method Implementation / Using the Input Params

For Rhino.Mocks, in order to provide an alternate implementation, you use the .Do() method to provide a delegate. In NSubstitute, you provide a delegate to the .Returns() call.

Rhino.Mocks

  mock.Stub(dao => dao.GetRecordFromDatabase(0))
                .IgnoreArguments()
                .Repeat.Any()
                .Do((Func<int, ImportantData>)(input => new ImportantData
                {
                    Name = "Orignal Name",
                    RecordId = input
                }));

NSubstitute

           
            substitute.GetRecordFromDatabase(0)
                      .ReturnsForAnyArgs(input => new ImportantData
                       {
                           Name = "Original Name",
                           RecordId = input.ArgAt<int>(0)
                       });

Throwing an Exception Instead

If the return type of the method is not void, you just provide a delegate to .Returns() that throws an exception. However, if the return type is void, then you use a .When().Do() syntax instead:

Rhino.Mocks: mock.Stub(x => x.SomeMethod()).Throw(new Exception("POW!"));
NSubstitute: substitute.SomeMethod().Returns(x => { throw new Exception("POW!"); });
             or (for void methods) substitute.When(x => x.VoidMethod()).Do(x => { throw new Exception("POW!"); });

Testing Non-Public Members

With Rhino.Mocks, you can’t mock private or protected members, but you can mock internal members if you add an InternalsVisibleTo attribute for the Castle dynamic proxy assembly. NSubstitute uses the same proxy, so you'll still need to add the attribute. See the Moq Quickstart Guide for details on how to do this.
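If you go the InternalsVisibleTo route, the attribute for the Castle proxy assembly looks like this (add the PublicKey portion if your assemblies are strong-named):

    // In AssemblyInfo.cs of the assembly containing the internal types
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("DynamicProxyGenAssembly2")]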



I've been a big fan of Rhino.Mocks for many years, but the original maintainer (Oren Eini, aka Ayende) announced in 2013 that he was no longer maintaining the project and handed it over to a new maintainer (Mike Meisinger). Mike, however, has not been actively maintaining it either. The project is effectively dead and very unlikely to support future versions of .NET, including .NET Core.

Moq (pronounced "Mock") has become the de facto replacement.

This post hopes to serve as a guide for people making the transition from Rhino.Mocks to Moq.

One big difference between Rhino.Mocks and Moq is what gets returned when you generate the mock and how you operate against it. With Rhino.Mocks, the MockRepository returns to you an instance of the type you're mocking and you apply the mock operators (.Stub(), .Expect(), .VerifyAllExpectations(), etc) directly to that mocked object. Moq, on the other hand, creates a wrapper object that contains a reference to the mocked type.

So in this example, I'm creating a mock of IFoo. For Rhino.Mocks, I get back an IFoo object, but Moq returns a Mock<IFoo>, and to get the IFoo object, you access it via the .Object property.

Rhino.Mocks: IFoo mock = MockRepository.GenerateMock<IFoo>();
Moq: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>();
     IFoo mockedObject = mockWrapper.Object;

When using Moq, if you have a reference to the mocked object, you can get back to the wrapper object with this helper method: Mock<IFoo> mockWrapper = Mock.Get(mockedObject);

Generating Different Mock Types

For those of you that read my Using Rhino.Mocks Quick Guide, you may recall there are three types of mocks that can be generated by Rhino.Mocks:

Strict Mock
A strict mock requires you to provide alternate implementations for each method/property that is used on the mock. If any methods/properties are used which you have not provided implementations for, an exception will be thrown.
Dynamic Mock
With a dynamic mock, any methods/properties which are called by your tests for which you have not provided an implementation will return the default value for the data type of the return value.  In other words, you'll get back a 0 for number types, false for Booleans and a null for any object types.
Partial Mock
A partial mock will use the underlying object's implementation if you don't provide an alternate implementation.  So if you're only wanting to replace some of the functionality (or properties), and keep the rest, you'll want to use this.  For example, if you only want to override the method IsDatabaseActive(), and leave the rest of the class as-is, you'll want to use a partial mock and only provide an alternate implementation for IsDatabaseActive().

Note that Moq uses the term "Loose Mock" for the Dynamic mock concept. Both frameworks default to Dynamic\Loose mocks.

Here's how you generate the same concepts in Moq:

Strict Mock:
    Rhino.Mocks: IFoo mock = MockRepository.GenerateStrictMock<IFoo>();
    Moq: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>(MockBehavior.Strict);
Dynamic\Loose Mock:
    Rhino.Mocks: IFoo mock = MockRepository.GenerateMock<IFoo>();
    Moq: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>();
         or Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>(MockBehavior.Loose);
Partial Mock:
    Rhino.Mocks: IFoo mock = MockRepository.GeneratePartialMock<IFoo>();
    Moq: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>() { CallBase = true };

I'm not a fan of this syntax, because it lets you mix settings in ways that don't make sense, like a strict mock that calls its base methods: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>(MockBehavior.Strict) { CallBase = true };. It's not clear what will happen in this scenario if I call a method I haven't explicitly mocked, because using two different inputs (a constructor argument and a property) to represent competing concepts leads to confusion. That said, I can tell you what happens: the Strict setting takes precedence and a runtime exception is thrown:

Moq.MockException : Class1.GetFoo() invocation failed with mock behavior Strict. All invocations on the mock must have a corresponding setup.

Passing Constructor Arguments

If you need to pass arguments to the constructor of the class you're trying to mock, there are overloads to allow you to do that.

Rhino.Mocks: IFoo mock = MockRepository.GenerateMock<IFoo>(param1, param2);
Moq: Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>(param1, param2);
     or Mock<IFoo> mockWrapper = new Moq.Mock<IFoo>(MockBehavior.Strict, param1, param2);

Stubs vs Mocks

Again, from my Using Rhino.Mocks Quick Guide, you may recall that:

A stub is simply an alternate implementation. A mock, however, is more than that. A mock sets up an expectation that

  • A specific method will be called
  • It will be called with the provided inputs
  • It will return the provided results

In Rhino.Mocks, you used the .Stub() and .Expect() extension methods to generate your stubs and mocks directly off your mock object. Moq, on the other hand, uses the .Setup() method on the wrapper object to create both. By default, it will create a stub (no expectation), but if you add Verifiable(), it will generate the expectations (thus, becoming a mock).

For both frameworks, you can explicitly verify stubs, but if you want to do mass verification, you must create the expectations up front.
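A minimal sketch of that flow in Moq:

    mockWrapper.Setup(x => x.SomeMethod()).Returns(true).Verifiable();  // expectation created

    // ... exercise the code under test ...

    mockWrapper.Verify();  // throws a MockException if SomeMethod() was never called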

For Methods:

Stub:
    Rhino.Mocks: mock.Stub(x => x.SomeMethod()).Return(true);
    Moq: mockWrapper.Setup(x => x.SomeMethod()).Returns(true);
Mock:
    Rhino.Mocks: mock.Expect(x => x.SomeMethod()).Return(true);
    Moq: mockWrapper.Setup(x => x.SomeMethod()).Returns(true).Verifiable();

Properties are a different story. In Moq, in addition to Mocks that carry expectations, you can generate stubs for properties that basically allow the properties to be set and have them return the values when the getter is called. You can do this for individual properties (and optionally provide an initial value) or you can do it for all properties with a single call using .SetupAllProperties().

Rhino.Mocks, on the other hand, doesn't provide the ability to track property values, so to get that same functionality, you'd need to use a callback (.Do() or .Callback()) and track the value yourself.
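For example, with .SetupAllProperties() every property on the mock tracks its value like a normal property:

    mockWrapper.SetupAllProperties();
    mockWrapper.Object.SomeProperty = "bar";      // setter stores the value
    var value = mockWrapper.Object.SomeProperty;  // getter returns "bar"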

For Properties:

Stub (always returns the same value):
    Rhino.Mocks: mock.Stub(x => x.SomeProperty).Return(true);
    Moq: mock.Setup(foo => foo.SomeProperty).Returns("bar");
Stub (returns tracked value):
    Rhino.Mocks: (must use a callback)
    Moq: mock.SetupProperty(f => f.SomeProperty);
Stub w/ initial value (returns tracked value):
    Rhino.Mocks: (must use a callback)
    Moq: mock.SetupProperty(f => f.SomeProperty, "bar");
Mock (always returns the same value, creates an expectation):
    Rhino.Mocks: mock.Expect(x => x.SomeProperty).Return(true);
    Moq: mock.SetupSet(foo => foo.SomeProperty).Returns("bar");
         mock.SetupGet(foo => foo.SomeProperty);

Verifying expectations

The concepts here are pretty similar. You can verify individual call patterns, or (if you created a Mock and not a Stub) you can verify all of the expectations you created in a single pass.

For Methods:

Called:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeMethod());
    Moq: mockWrapper.Verify(x => x.SomeMethod());
Called a specific number of times:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeMethod(), options => options.Repeat.Times(2));
    Moq: mockWrapper.Verify(x => x.SomeMethod(), Times.Exactly(2));
Not called:
    Rhino.Mocks: mock.AssertWasNotCalled(x => x.SomeMethod());
    Moq: mockWrapper.Verify(x => x.SomeMethod(), Times.Never);

For Properties:

Get:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeProperty);
    Moq: mockWrapper.VerifyGet(x => x.SomeProperty);
Set:
    Rhino.Mocks: mock.AssertWasCalled(x => x.SomeProperty = true);
    Moq: mockWrapper.VerifySet(x => x.SomeProperty = true);
Not called:
    Rhino.Mocks: mock.AssertWasNotCalled(x => x.SomeProperty = true);
    Moq: mockWrapper.VerifySet(x => { x.SomeProperty = true; }, Times.Never);

Mass Verification

Moq can do mass verification in two ways. If you have created a mock that sets up expectations using .Expect() in Rhino.Mocks or .Verifiable() in Moq, you can use Moq's .Verify() method to validate just those expectations. Moq also provides a .VerifyAll() method which will validate all of the mocks and stubs you've created with .Setup().

Verify Mocks only:
    Rhino.Mocks: mock.VerifyAllExpectations();
    Moq: mockWrapper.Verify();
Verify Mocks and Stubs:
    Rhino.Mocks: (not available)
    Moq: mockWrapper.VerifyAll();

Controlling Mock Behaviors

Here are some of the general behavior modifications in Rhino.Mocks and their Moq equivalents:

Change how many times to use the mock:
    Rhino.Mocks -- use .Repeat:
        mock.Expect(x => x.SomeProperty)
            .Repeat.Times(2)
            .Return(true);
    Moq -- use .SetupSequence():
        mockWrapper.SetupSequence(x => x.SomeMethod())
            .Returns(true)
            .Returns(true)
            .Throws(new Exception("Called too many times"));
Ignore arguments:
    Rhino.Mocks -- use .IgnoreArguments():
        mock.Expect(x => x.SomeMethod("param"))
            .IgnoreArguments()
            .Return(true);
    Moq -- use argument constraints:
        mockWrapper.Setup(x => x.SomeMethod(It.IsAny<string>()))
            .Returns(true);

Advanced Argument Constraints

Both frameworks provide ways to put advanced constraints on the arguments that trigger a mock. Below are examples -- you'll need to consult the framework documentation for the full list of available constraints.

Rhino.Mocks

 mock.Stub(dao => dao.GetRecordFromDatabase(
                         Arg<int>.Is.GreaterThanOrEqual(0),
                         Arg<decimal>.Is.NotEqual(2.0),
                         Arg<List<string>>.List.ContainsAll(new List<string> { "foo", "bar" }),
                         Arg<object>.Is.NotNull,
                         Arg<object>.Is.Anything))
                 .Return(recordFromDatabase);

Moq

 mockWrapper.Setup(dao => dao.GetRecordFromDatabase(
                         It.Is<int>(i => i >= 0),
                         It.Is<decimal>(d => d != 2.0m),
                         It.IsRegex("[a-d]+"),
                         It.IsNotNull<object>(),
                         It.IsAny<object>()))
         .Returns(recordFromDatabase);

Providing a Method Implementation / Using the Input Params

For Rhino.Mocks, in order to provide an alternate implementation, you use the .Do() method to provide a delegate. In Moq, the .Returns() method has an overload that lets you provide a delegate.

Rhino.Mocks

  mock.Stub(dao => dao.GetRecordFromDatabase(0))
                .IgnoreArguments()
                .Repeat.Any()
                .Do((Func<int, ImportantData>)(input => new ImportantData
                {
                    Name = "Orignal Name",
                    RecordId = input
                }));

Moq


            mockWrapper.Setup(dao => dao.GetRecordFromDatabase(It.IsAny<int>()))   
                .Returns((Func<int, ImportantData>)(input => new ImportantData
               {
                   Name = "Orignal Name",
                   RecordId = input
               }));

Throwing an Exception Instead

This is pretty much a drop-in replacement when creating the mock/stub. Where Rhino.Mocks uses .Throw() for this purpose, Moq uses .Throws().
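For example, using the same SomeMethod setup as in the tables above:

    // Rhino.Mocks
    mock.Stub(x => x.SomeMethod()).Throw(new Exception("POW!"));

    // Moq
    mockWrapper.Setup(x => x.SomeMethod()).Throws(new Exception("POW!"));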

Testing Non-Public Members

With Rhino.Mocks, you can’t mock private or protected members, but you can mock internal members if you add an InternalsVisibleTo attribute for the Castle dynamic proxy assembly. Moq also uses the same proxy, so you'll still need to add the attribute, but Moq has the added benefit of being able to mock protected members. See the Moq Quickstart Guide for details on how to do this.



At the office, I have two nice, big monitors so I can spread out my work. I have become so accustomed to this much real estate that when I work from home, not having two monitors becomes a noticeable hindrance to my productivity.

While I do have a second monitor at home to attach to my laptop, I primarily remote desktop into my machine at the office, and that meant going back to a single monitor -- until now!

With Windows 8 (or maybe it was 8.1), the remote desktop client allows you to utilize all your local monitors. It's really easy to use, too. Just check the checkbox!

Update: I'm told this works in Windows 7 as well -- seems this was a well-kept secret.



I put this together last year while looking into ways to reduce the time it took to download an approximately 45 MB SOAP response from a Microsoft Dynamics CRM service (ie: a giant XML document).

In the process, I found surprising results at how much better the user experience was when you combine GZIP compression with SSL encryption.

While the below write-up is specific to CRM in IIS, it should apply much more generally. I hope you find this helpful.

Summary / Real-World Proof

I implemented the below-described SSL + GZIP IIS configuration to speed up the metadata downloads on an internal server. While I saw no measurable difference in download times from my dev machine on the local network (which downloaded the metadata in approx. 25 seconds), my coworker, who connects over a VPN from two time zones away, saw download times go from approx. 5 minutes before the change to approx. 30 seconds after.

This can be attributed exclusively to the SSL + GZIP config change, as we were already running just GZIP on one of our servers and just SSL on another, and both were taking the full amount of time to download metadata. It was only once I enabled both GZIP and SSL that the times dropped so significantly. Ultimately, this is due to the drastically reduced payload size (data going across the wire) when you combine the two technologies.

Overview

After some investigation into how to improve the download times for the CRM metadata, I think one of our best options is to suggest users enable dynamic compression for SOAP data and utilize SSL. This reduces the payload size going across the network by ~96%, which represents the overwhelming majority of the user's wait time.

Findings

Out-of-the-box, Dynamics CRM will enable the dynamic (GZIP) compression setting for the web interfaces (including WCF services), but IIS7’s default configuration does not consider SOAP to be compressible. You must manually add SOAP to the list of dynamicTypes, which is a host-wide config change. Further, enabling SSL with compression significantly reduces the payload size.

Estimated download payloads and timings:^

  • Default install (IIS7, no dynamic compression, no SSL): 44.5 MB = 8 min
  • With GZIP compression for SOAP: 33 MB = 6 min
  • With SSL only: 33 MB = 6 min
  • With GZIP and SSL: 1.5 MB = 17 sec

^Times are best-case, assuming you’re using a network connection with 768Kbps (.09MBps) download speed, the average DSL speed in America. Actual times will likely be slower.

That’s not a typo – enabling both SSL and GZIP took the time down to 17 seconds, or ~3.5% of the original time.

How To:

Step 1: Enable dynamic compression for SOAP data in the IIS applicationHost.config

Enable compression by manually updating the ApplicationHost.Config

  • On the CRM server, navigate to C:\Windows\System32\Inetsrv\Config\applicationHost.config and open it with Notepad.
  • Search for the <dynamicTypes> section; in it, you should find an entry that looks like this: <add mimeType="application/x-javascript" enabled="true" />
  • Below that, add the following line (see the sketch after this list): <add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
  • Save the file and reset IIS for the setting to take effect.
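After the edit, the <dynamicTypes> section will look something like this (trimmed; the neighboring entries shown are just illustrative of the IIS7 defaults):

    <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
        <add mimeType="*/*" enabled="false" />
    </dynamicTypes>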

Step 2: Ensure dynamic compression is enabled for the Dynamics service:

Note: This should already be enabled in the default configs, but it may have been changed by a sysadmin.

In IIS Manager, open the compression settings for the host:

Ensure dynamic compression is checked.

Open the Dynamics site compression settings:

Ensure dynamic compression is enabled:

Step 3: Enable SSL using a self-signed cert

Follow these instructions to enable SSL with a self-signed cert.

Step 4: Export the cert and install on desktop

The CRM SDK won't connect to a site with certificate errors, so if using an untrusted (self-signed) cert, you'll need to add it to the desktop's trusted certs.

In IIS Manager, from the Server Certificates page, click Export, select a location to save the file and enter a password.

Copy that file to your desktop machine and double-click the file, which should open the certificate import wizard.

Select Current User (or, to make the cert apply to all users on the machine, select Local Machine) and complete the wizard, using the same password when prompted as you entered on the server during export.

When prompted for which certificate store to use, select "Place all certificates in the following store" and browse to the "Trusted Root Certificate Authorities". Finish the wizard and agree to all of the security warnings (there may be several).

You may need to restart your desktop machine for the certificate settings to take effect.




TeamCity supports HTTPS access; however, they don't provide instructions for configuring it. Instead, they point you to a set of third-party instructions that are difficult to piece together and not really clear for people who 1) aren't familiar with Java and 2) are running on a Windows server. So in this post, I'm documenting the steps I followed to get a TeamCity 8.1 server up and running with an SSL cert purchased from a signing authority.

Step 1: Create a PKCS#12 Cert File

If you already have a version of your cert that ends with .p12 or .pfx, you can skip this step. Otherwise, you likely have a .cert, .cer or .crt file. You'll need to convert it to PKCS#12 format using the instructions I've provided in a separate post: Converting a SSL Certificate to PKCS#12 Format on Windows

I suggest placing the file in the /conf folder of your TeamCity installation.

Step 2: Configure the TeamCity server Connector

Open the /conf/server.xml file in your TeamCity installation folder with your favorite text editor and find the <Service name="Catalina"> section where it defines the <Connector> entry. Add an entry as follows:


    <Connector port="443" 
               protocol="HTTP/1.1" 
               SSLEnabled="true"
               scheme="https" 
               secure="true"
               clientAuth="false" 
               sslProtocol="TLS" 
               keystoreFile="C:/your.path/TeamCity/conf/exportedCert.pfx"
               keystorePass="yourpassword"
               keystoreType="PKCS12"
               maxThreads="150" 
               />

Where:

  • port is the listening port for HTTPS. The standard port for HTTPS is 443.
  • keystoreFile is the correct path to the .pfx file (hint: Shift-Right-Click the file and choose "Copy as path"). Make sure to use forward slashes in your path here, not the standard Windows back-slashes.
  • keystorePass is the password for the cert (change yourpassword to your actual password).

Now save and restart the server!

If there were any issues, they will be logged into the /log/catalina*.log file, so take a look there if things don't "just work".

Also, don't forget to set the URL in the server's configuration page so that emails, etc, use the new URL.



I'm working on configuring a couple of different Java-based servers (SonarQube and TeamCity) to use HTTPS for connectivity, which is fairly easy if you have a PKCS#12 format cert file. In this post, I'll walk through one option (there are others) for converting a .cert, .crt or .cer file into the PKCS#12 format using the built-in Windows certificate store.

Note: you'll need to be an administrator on the Windows machine you're using to do the conversion.

In the Windows Start page, type "Manage Computer Certificates" and open the MMC (or run MMC directly and add the cert snap-in). Right-click on the 'Personal' certs folder and choose the Import option from the All Tasks... menu. This will open the Certificate Import Wizard.

Select your certificate file, enter the cert's password and make sure to enable the "Mark this key as exportable" option. Finish the wizard.

Now, in the MMC, find the cert where you imported it and right-click on it. From the All Tasks... menu, choose Export to open the Certificate Export Wizard. If you don't see Export, go back and make sure you enabled "Mark this key as exportable" during the import process.

Walk through the export wizard and choose "Yes, export the private key".

When asked what format to export to, choose PKCS #12 and enable the "Include all certificates in the certification path" option. You'll be asked to set security for the cert -- select the password option and enter a password. Remember this password, you'll need it later when you configure the webservers to use this cert. Finish the export wizard.

You now have a .pfx file with your PKCS #12 formatted cert.
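As an aside, if you happen to have OpenSSL available (and the private key as a separate file), the same conversion can be done with one command; the file names here are placeholders:

    openssl pkcs12 -export -in mycert.crt -inkey mykey.key -out mycert.pfx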



I was recently asked in an email if I knew of any tools that would translate the XML results file from the JetBrains ReSharper command line tool inspectcode into a human readable format, such as HTML. For your viewing pleasure, here was my response:

There aren't any tools out there yet to convert the XML to HTML, but the XML format is fairly simple, so I suspect it wouldn't take much to write your own.

The file is broken into two sections: <IssueTypes> and <Issues>. (A sample skeleton appears at the end of this post.)

Under <IssueTypes>, there's a collection of <IssueType> elements, which list all of the violation types that were discovered in your code. Each one has the following possible attributes:

  • Id
    • This is the unique identifier for the rule
  • Category
    • This is a general grouping for the rule types
  • SubCategory (optional)
    • Some of the groupings are split further
  • Description
    • This is a general description of what the rule is checking
  • Severity
    • One of these values: ERROR, WARNING, SUGGESTION, HINT, DO_NOT_SHOW
  • WikiUrl (optional)
    • A link to a jetbrains webpage that has additional details about the rule

In the <Issues> section are the specific instances of rule violations found in your code. Under <Issues>, you will find a collection of <Project> elements, each with a Name attribute that will match your Visual Studio project name. Under each <Project> element will be a collection of <Issue> elements, each with the following attributes:

  • TypeId
    • A reference to the rule in the <IssueTypes> collection; the TypeId here matches the Id of an <IssueType> element
  • File
    • The file path that contains the violation. This path is relative to the Visual Studio Solution's folder
  • Line
    • The line number in the file where the issue occurred
  • Message
    • A message about why this line of code violated the rule. The message is instance-specific and often includes the variable name or other context information specific to this line of code.
  • Offset
    • I'm not 100% sure what this actually represents, but from what I can tell, it's the character range (offset from the start of the file) of the specific text in the file that violates the rule. In Visual Studio, this would be the text that is highlighted/underlined, etc. So a value of "1684-1722" would be the 1684th character in the file through the 1722nd character.
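Putting that together, a skeleton of the results file looks roughly like this (the ids, paths and messages are made-up examples):

    <Report>
      <IssueTypes>
        <IssueType Id="ConvertToConstant.Local"
                   Category="Common Practices and Code Improvements"
                   Description="Convert local variable to constant"
                   Severity="HINT" />
      </IssueTypes>
      <Issues>
        <Project Name="MyProject">
          <Issue TypeId="ConvertToConstant.Local"
                 File="MyProject\Program.cs"
                 Offset="1684-1722"
                 Line="42"
                 Message="Convert to constant" />
        </Project>
      </Issues>
    </Report>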

Hope that helps



I really like the Visual Studio 2012 “Dark” theme, but haven’t been able to use it much due to it setting the XAML designer background color to black.  Since most of my user controls have transparent backgrounds (allowing the underlying Window to set the bg color), I couldn’t see the elements on my control.

dark_designer

Then, Scott Hanselman posted about the Visual Studio 2012 Color Theme Editor, which allows me to customize the themes (or create my own).  The problem was that I couldn’t find the right element name to set the color for the designer background.  So now that I’ve finally found it (Cider –> ArtboardBackground), I’m posting it here so I don’t forget.  Enjoy!

theme_screenshot



Being in the tech industry, I occasionally get asked by family and friends to help with computer issues. Two items in particular come up, either because they asked, or more likely, because I bring it up:

  • Virus Protection
  • Backups

So, I decided to write down my thoughts for future reference.

Virus Protection:

Everyone should be running anti-virus software -- always.  Let me say that again: EVERYONE SHOULD BE RUNNING ANTI-VIRUS SOFTWARE. ALWAYS!

The Internet, being the giant series of tubes that it is, is great for sharing information, but it's also a breeding ground for nasties.  Your first line of defense should be a network firewall.  Chances are you have a router, possibly built into your modem, sitting at the connection point between you and the Internet. In most cases, the router acts as a firewall too.  If it's a "NAT" router (that's Network Address Translation), it also provides an additional layer of protection by giving your computer a "private" IP address that's not directly accessible from the Internet at large.  This means the bad guys can't just attack your PC directly -- they have to find a way for your PC to come to them.  The bad news: it's really easy to get you to come to them: email, social network sites, rogue ads on webpages, phishing links... you get the idea.

So, install an anti-virus package.  There are many, many choices, including Norton AntiVirus, AVG, avast!, McAfee, and many others.  But my recommendation is to use Microsoft Security Essentials.  It's free, it's integrated with Windows Update, and from my experience, it's just as accurate as the other guys and doesn't seem to hurt performance.

Now, it's not enough to just install the software.  You also need to keep it updated.  Out of the box, all of these programs are configured to auto-update their virus definitions.  In most cases, it's configured to do it in the middle of the night, one or more nights a week.  This is fine, as long as you keep your computer on all night.  But, if you turn off your PC, or have a laptop that turns off when you close the lid, make sure to change the settings to run during a time the computer is on.  It should be fine to run while you're using the PC.  If your PC isn't on frequently enough to have a set schedule, make sure to open the virus software (there should be an icon down by the clock in the lower right corner) and manually kick off the updates once a week.

I've been running Microsoft Security Essentials for years with good results; however my wife's laptop got hit pretty hard by a virus last year which was missed by Security Essentials and several other anti-virus packages I tried.  Eventually, after having to do a complete re-install of the machine twice, I found a secondary anti-malware package that did the job: MalwareBytes.  This program is not intended to be a replacement for your anti-virus, rather it's a supplement.  The designers do not try to catch the bad stuff that the anti-virus apps will find -- they target the stuff that's difficult for them to find.  It's a for-pay application (after a fairly long trial period) if you want real-time checking (which you probably do), but worth it if you find yourself frequently getting hit by nasty, slimy bits.

Note: "malware" (shortened for of "malicious software") is the larger category of viruses, worms, trojan horses, key loggers, root kits and other applications that intend harm or deception.  Most "anti-virus" software is really "anti-malware", as they protect against more than just viruses.

Backups:

Backups are an insurance policy.  You don't need them until you really need them, and by that time, it's too late.  For years, I went without backups, and lost many, many files to accidental deletes, hardware failures, viruses and just stupidity.

There are several levels of backup, and anything is better than nothing!

File backups:

If you just need to make sure a handful of files are safe from getting nuked, you have a lot of options. I prefer DropBox, which is a cloud-based storage application.  It also has the added benefit of keeping the files sync'd across multiple computers. So in my case, I have the DropBox client running on my home machine, my work machine, my iPad and iPhone, as well as on the web, and it keeps the files updated in all those places with no actions on my part.   And, it's free for 2GB, with options to buy more storage. You can also unlock additional storage by recommending it to friends (thus, the link above has my "recommend to friends" info in them -- Disclaimer: If you signup using that link, I'll get some extra space too).

Microsoft has a similar product called SkyDrive, which as of this week, has a limit of 7GB for the free account. And Google is expected to go live with their solution soon, so expect to see free account size limits increase and the three companies compete for customers.

File and System backups:

For most versions of Windows, you can set up the built-in backup -- you'll likely want to buy an external USB or FireWire (if your PC supports it) hard drive to store the files.  For full system backups, you'll want a big drive, since it'll keep multiple copies of your current hard disk -- so shoot for 2x to 5x the currently used size of your main hard drive.  It's generally a bad idea to use a second partition on your main hard disk, since a disk failure will kill your data and your backups at the same time.

There's also Carbonite, an online backup solution. I've not used it, but have heard good things. **Update: I now use Carbonite, and like it as an off-site backup. Be warned, though: It chews up a lot of bandwidth, especially as it uploads your initial files.

Finally, there's what I've been using for a little over a year: Microsoft Home Server.  You can purchase what is basically a server in a box (with no screen) with various levels of hardware and configuration.  I built my own, opting for uber-protection using RAID-5 arrays for harddrive protection inside the box.  Home Server will easily configure your Windows machines to not only backup your files to the network device, but also perform full system images.  This allows you to do a full system restoration, including harddrive partitioning, by just booting from a CD or USB drive you create from the server, selecting which backup image you want to restore (by date backup was taken) and sitting back to watch for about 45 minutes while it does all the work.  It also provides a personal website that you can access from anywhere, using SSL (https) encryption, where you can get access to files stored on the server, get Remote Desktop access to any machines online at home that support it, and more.  This is definitely NOT the cheap route to go, and is overkill for most. Update: Microsoft has chosen not to continue the Windows Home Server product, suggesting people move to Windows Server Essentials for that support. However, unless you're willing to seriously earn your IT merit badge, I wouldn't suggest going this route.

Now, go make sure you've got anti-virus running, backups in place, and spread the word!



If you’re using Jenkins as your Continuous Build platform, but would like to have a tray icon a la Cruise Control or TeamCity, it is still possible!  The Jenkins folks have provided a compatibility option for CCTray that will provide you with build status and the ability to go to the build website, although most of the other functionality doesn’t work (such as forcing a build from CCTray).

For those not familiar with CCTray, it’s a little Windows app that shows a red/green/yellow status icon in your system tray representing your current build(s) status. It will also provide balloon notifications on build success/failure.

If you open the app by double-clicking the icon, you get the full build list:

So, how do you set this up?

First, install CCTray.Net from here: http://sourceforge.net/projects/ccnet/files/CruiseControl.NET%20Releases/ (you’ll want to click the newest version link and find the CCTray-specific installer).

Then, from the CCTray application, go to File -> Settings, select the “Build Projects” tab and click “Add”:

On the next screen click “Add Server”

Now, here’s the special sauce:

For the Build Server, select the “Supply a custom HTTP URL” option and enter the URL for your Jenkins server, followed by “/cc.xml”.  This is a special file generated by Jenkins to support Cruise Control monitoring.  And, for an extra benefit, if you have custom filtered views in Jenkins, you can use those too. Say, for instance, you have just your nightly builds in a view called “Nightly”, when you go to that view/tab in Jenkins, the URL will be something like “http://jenkinsServer/view/Nightly”, so just enter “http://jenkinsServer/view/Nightly/cc.xml” to only monitor those builds in the view.

After clicking OK, the next window will allow you to select which specific builds you want to monitor. Select them and click “OK”.  And, you’re done!



So I'm playing around with WPF/Silverlight and find it quite annoying that you need to provide the string-based name of a property when raising your INotifyPropertyChanged events.  So you end up with code like this:

public partial class MyClass : System.ComponentModel.INotifyPropertyChanged {

        public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

        protected internal void OnPropertyChanged(string propertyName) {
            if (PropertyChanged == null) return;
            PropertyChanged(this, new System.ComponentModel.PropertyChangedEventArgs(propertyName));
        }

        private decimal _myProperty;

        public virtual decimal MyProperty {
            get { return _myProperty; }
            set {
                if (_myProperty.Equals(value)) return;
                _myProperty = value;
                OnPropertyChanged("MyProperty"); // <-----  This is annoying
            }
        }
  }

The problem here is that you now have a string-based reference to your property name. If you ever change the name of your property, you must remember to change the string. And if you don't remember -- there are no build-time errors to stop you. Thus, there is risk to any refactoring efforts -- something which I like to avoid!

So, how do you fix this? Well, one fairly straightforward way is to use expression trees / lambdas to derive the name of your property from an actual reference.  In the following code example, I've added a method that takes a lambda and returns the string-based name of the property.

private string GetRefactorProofPropertyName<T>(Expression<Func<T>> property) {
    LambdaExpression lambdaExpression = (LambdaExpression)property;
    var memberExpression = lambdaExpression.Body as MemberExpression
                               ?? ((UnaryExpression)lambdaExpression.Body).Operand as MemberExpression;
    return memberExpression.Member.Name;
}
       

Now I can modify my property setter like this:

public virtual decimal MyProperty {
      get { return _myProperty; }
      set {
           if (_myProperty.Equals(value)) return;
           _myProperty = value;
           OnPropertyChanged(GetRefactorProofPropertyName(() => MyProperty)); // <--- this is better (refactor-safe, checked at compile time)
     }
}

Some references: http://stackoverflow.com/questions/3558974/select-a-model-property-using-a-lambda-and-not-a-string-property-name

http://stackoverflow.com/questions/3567857/why-are-some-object-properties-unaryexpression-and-others-memberexpression



I've got another dev tool that I wanted to pass along:

A few months ago, Jon Skeet posted a tweet about a new tool he was using called NCrunch.  Since then, I've been playing with the tool and working with the author to resolve some of the issues (here and here) that were blocking it from working smoothly in my environment.  I believe it's now to the point where my coworkers who wish to take advantage can do so and where I can promote its use to the world.

NCrunch, at its core, is a TDD extension for Visual Studio.  It will run your unit tests in the background and provide real-time unit test results (no need to even save your file – runs as you type) by way of color-coded dots to the left of each line of code. (Green = passing, red = failing, black = not covered).  It will provide details for exceptions that are thrown and many other cool features.  This will allow you to get immediate feedback if changes you are currently typing break/fix any unit tests.

There's a very good demo video on the NCrunch homepage (about 6 minutes long) that I think is worth watching to get a feel for what the tool can do.

Key features (or at least "My favorites"):

  • Line-by-line, real-time status of unit test coverage
  • Context menu access to applicable unit test
  • Tool-tip/hover bubble with details on:
    • number of covering tests,
    • performance,
    • exception details/stack trace
  • Visual indicators for performance metrics (slow tests have yellow centers -- with transparency based on level of slowness)
  • Quickly run covering tests, debug into a given line
  • Ability to configure how much CPU it will use.
  • It's FREE!! (update: NCrunch will be going for-pay soon)

Cons:

  • Feature overlap with TestDriven.Net and ReSharper test runners (although, the need for those may go away if you use NCrunch)
  • Some rough edges still (see below), but the developer is very actively updating and fixing bugs, and very responsive to users on the forum, twitter, etc.

There are a couple of things to note:

  • NCrunch does a lot of background compilation and running unit tests. It appears to be smart enough to only compile/run tests that are affected by changes you are making. In any case, if you have a slow machine, you may want to disable the automatic testing and run in manual mode.  I have a very beefy development box (8 cores, 8GB memory) and don't see any issues (I also run ReSharper with full solution analysis mode with no issues).
  • NCrunch lets you designate which unit tests to run/ignore.  In our case, we have both unit tests and system tests (db dependent) in the same solution, so developers would want to enable the unit tests but ignore the system tests.  When you first enable NCrunch for a project it asks you if you want to ignore all tests – I'd suggest doing that and then using the Tests window (accessible from the NCrunch menu) to unignore the tests/assemblies you care about from the right-click context menu.
  • NCrunch has the option of running tests linearly or in parallel.  If your tests are written such that they do not have side effects and don't share singletons, etc., then you should be better off running in parallel. However, you run the risk of having tests interfere with each other.  For our code, we need to run the tests one at a time. (Update 2012-02-02: per the comment from the author, each test is run in a separate process, so there is no memory/static property sharing and little risk in running the tests in parallel.)
  • NCrunch compiles each project in an isolated, shadow-copied environment in the background. There are some cases, though, where the Visual Studio configurations are such that NCrunch doesn't automatically determine all of the referenced assemblies that need to be copied. In those cases, you can flip a configuration setting to have NCrunch copy the output folder over into the shadow environment.  This resolves the issue, but does have a performance impact.   This shows itself as an error in the NCrunch Tests window with the message "Cannot register assembly XXX or one of its dependencies. The system cannot find the file specified."

To resolve this, you need to enable the "CopyReferencedAssembliesToWorkspace" option for the project by going to the NCrunch Visual Studio menu and choosing the Configuration option, selecting the project in the Configuration window and changing the property.



I had a situation today where I needed to modify an existing method that fetched a DataTable from the Data Access Layer, modified it, and returned it as a DataView.  My task was to filter the rows in the DataTable based on a call into managed code (ie: not something that could be done at the db level).

Now, I'm somewhat new to DataTables, having used ORMs for most of my .Net experience, so this was actually more difficult than I initially expected.  I was hoping to just set a value in RowFilter and be done. Unfortunately, as best I can tell, RowFilter does not allow row-specific dynamic filters (ie: you cannot call into a method for each row).  Furthermore, the return value of the method needed to stay a DataView, since I'm tasked next with backporting the change to our production branch and need to greatly limit the scope of my changes.

So after some googling, I was able to craft a solution using LINQ, which I wanted to document here for future reference.  Obviously, the code has been changed to protect the guilty, which has the side effect of greatly simplifying the logic. 

I needed to reference System.Data.DataSetExtensions to have access to the DataTable LINQ extensions.

Then I did this:


     public abstract bool SecretFilteringMethod(int someId);

     public DataView RetrieveFilteredRecords(int someId)
        {

           DataSet ds = DALServiceProxy.RetrieveRecords(someId);
           DataTable myTable = ds.Tables[0];

            // this is bound to a UI drop-list, so add some usability enhancing rows
             ds.SuspendColumnValidation();
        
            DataRow row1 = myTable.NewRow();
            row1[Consts.Columns.Name] = "-- Select One --";
            row1[Consts.Columns.ID] = -1;
            myTable.Rows.InsertAt(row1, 0);

            ds.ResumeColumnValidation();

            // unlike most LINQ methods, this returns an EnumerableRowCollection<T> instead of IEnumerable<T>
            var rowsAfterManagedCodeFiltering = myTable.AsEnumerable()
                .Where(dpRow => !dpRow.IsNull(Consts.Columns.ImportantField))
                .Where(dpRow => SecretFilteringMethod((int)dpRow[Consts.Columns.ImportantField]));

            return rowsAfterManagedCodeFiltering.AsDataView();
        }

The return value of AsDataView() is a LinqDataView object, which has its Table property set to the original DataTable, so the result is fairly close to what would happen if I'd set a RowFilter. However, instead of RowFilter, the RowPredicate property is set with the LINQ representation. 

One thing to note: RowPredicate and RowFilter are mutually exclusive, so if a consumer later tries to set RowFilter to further refine the view, it will erase the RowPredicate, thus falling back to the base DataTable's full set of records. 

Personally, I'd rather move away from the DataTable altogether and return an IEnumerable<T>, but that wasn't really a logistical option at this point. 

Additionally, if you have an EnumerableRowCollection<T>, you can create a new DataTable with only the rows in the collection with the .CopyToDataTable() extension method.
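For example, to materialize the filtered rows from the snippet above into a brand-new, detached DataTable:

    DataTable filteredCopy = rowsAfterManagedCodeFiltering.CopyToDataTable();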




After reading my Rhino.Mocks Quick Reference post, a colleague and I had a discussion about the proper way to validate expectations in Rhino.Mocks. Specifically, he questioned whether my use of .VerifyAllExpectations() was correct for the Arrange, Act, Assert syntax or whether, as he proposed, it is the "old" syntax, being replaced by the .AssertWasCalled() methods.

Since this is a person I respect, I decided not to smite him and instead took a mental note to do some quick research. Having failed to actually find Ayende's opinion on the matter in the first 5 minutes, I did come across this post, which raised a point I had forgotten:

In Rhino Mocks, expectations on stubs are not verified; only mocks are verified. If an object is created with GenerateStub instead of GenerateMock, then its VerifyAllExpectations method doesn't do anything. This is non-obvious because the AssertWasCalled and AssertWasNotCalled methods on a stub will behave the way you want them to. In Rhino Mocks, a stub can keep track of its interactions and assert that they happened, but it cannot record expectations and verify they were met. A mock can do both these things. 

So, if you are using a Stub, you must not use .VerifyAllExpectations(), because it will always pass.
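Here's a quick sketch of the trap, using an IDataAccess interface like the one from my quick reference post:

    var stub = MockRepository.GenerateStub<IDataAccess>();
    stub.Expect(x => x.GetRecordFromDatabase(1)).Return(null);

    stub.VerifyAllExpectations();  // passes -- even though GetRecordFromDatabase(1) was never called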

Now, one might argue (and many, many on the Internet do argue) that a Stub should not have any expectations, so you shouldn't be calling either method. Stubs are for providing inputs to allow the code under test to run and to limit your test to just the code under test, while Mocks are used to validate behavior (they are part of the test itself).

If someone does have a link to Ayende's opinion, please post as a comment or email/tweet me. I'll update this post if I find it.

Update: I found this (http://ayende.com/wiki/Rhino+Mocks+3.5.ashx#ExpectExtensionMethod), where it seems Ayende expects people to use either approach (ie: he doesn't state here that one is "proper").  I still seem to remember a blog post he wrote stating a preference for one over the other, but I haven't yet found it.



Microsoft has released a new Visual Studio Power Tool called “Debugger Canvas” that looks to be a very useful way to debug your apps.  Basically, instead of jumping from file to file while stepping through your code in debug mode, it will lay out the call stack as method “bubbles” on a single tab, with local variable information available for each bubble:




There’s a really cool video demo on the download page that I would suggest you watch.

As well as this overview blog\announcement by Mary Jo Foley

Which points to this announcement page



This post is a general review of the Rhino.Mocks syntax. While it is intended primarily as a quick reference for myself, I hope anyone who stumbles upon it can also find it useful.

Rhino.Mocks is a .Net mocking framework which is extremely useful for stubbing out objects in your unit tests to ensure you're only testing what you want to test and to allow you to control the environment/state/surroundings under which your tests execute. It is written by Ayende Rahien, the same guy that created nHibernate.  My typical dev unit testing environment would be Visual Studio (C#) + nUnit + Rhino.Mocks. You can either use the nUnit command line tools to run the tests or several good tools that integrate into Visual Studio, such as ReSharper, TestRunner.Net or, my favorite, NCrunch.

For readability, I suggest writing your tests using the Arrange, Act, Assert syntax.

This post is broken into several sections, starting with a general overview and then hitting on several specific use cases of interest:

  • Mock Options
  • Creating a Mock
  • Limiting the Scope of Your Tests
  • Stubs vs Mocks
  • Controlling Mock Behaviors
  • Providing a Method Implementation / Using the Input Params
  • Throwing an Exception Instead
  • Mocking a Property
  • Testing Non-Public Members

The full code for the snippets used in my examples can be found at the end of this posting.

You may also be interested in my posting about Mocking Objects with Restricted Access (internal/private), which has examples of using reflection to manipulate objects that can’t be mocked with Rhino.Mocks.

For additional features, such as raising events in your mock, see the official Rhino.Mocks guide.

Mock Options

Rhino.Mocks supports three basic types of mock objects:

Strict Mock
A strict mock requires you to provide alternate implementations for each method/property that is used on the mock. If any methods/properties are used which you have not provided implementations for, an exception will be thrown.
Dynamic Mock
With a dynamic mock, any methods/properties which are called by your tests for which you have not provided an implementation will return the default value for the data type of the return value.  In other words, you'll get back a 0 for number types, false for Booleans and a null for any object types.
Partial Mock
A partial mock will use the underlying object's implementation if you don't provide an alternate implementation.  So if you're only wanting to replace some of the functionality (or properties), and keep the rest, you'll want to use this.  For example, if you only want to override the method IsDatabaseActive(), and leave the rest of the class as-is, you'll want to use a partial mock and only provide an alternate implementation for IsDatabaseActive().

IMPORTANT: Rhino.Mocks can only mock/stub virtual members of a real class, so make sure the members you care about are virtual -- OR, even better, mock/stub an interface, in which case you can do whatever you want.

There are also methods for generating stubs (see the "Stubs vs Mocks" section below).

Creating a Mock

To generate mocks, you'll use the static factory Rhino.Mocks.MockRepository. From this factory, you'll create your mock using one of a few generic methods:

  • GenerateMock<T> (for DynamicMocks)
  • GeneratePartialMock<T>
  • GenerateStrictMock<T>

where the T is the class/interface being mocked. The method parameters, if you provide any, will be passed to the object's constructor.

For example:

        var _mockDAO = MockRepository.GenerateMock<IDataAccess>();
        // constructor args only make sense when mocking a class (interfaces have no
        // constructors), so assume DataManager here is a class
        var _mockManager = MockRepository.GenerateStrictMock<DataManager>(someParam);

As a general rule, I will generate all of my mocks in the test suite [SetUp] method to ensure everything is reset from one test to the next.  You'll see in my test fixture file in the Code Used In Examples section that I have done just that.

Limiting the Scope of Your Tests

Scenario:

I have a BLL object that I want to test, but it relies on a DAL object to fetch/update records in the database. I want to test my BLL object in isolation from the DAL object, so I mock the DAL interface.

Things to note in this example: I generate a stub for the dao's GetRecordFromDatabase() method so that when it's called with the recordId I care about, it will return my prepared value. This removes the dependency on the DAO layer (which is not even used since I'm mocking an Interface for the DAO) and ensures my test is controlling the inputs and outputs so I'm getting exactly what I want for my specific test condition.

            mockDAO.Stub(dao => dao.GetRecordFromDatabase(myRecordId))
                   .Return(recordFromDatabase);

Code:


    [Test]
    public void TestGetImportantDataAndUpdateTheName()
    {
        //Arrange
        int myRecordId = 100;
        var recordFromDatabase = new ImportantData
                                     {
                                         Name = "Orignal Name",
                                         RecordId = myRecordId
                                     };

        _mockDAO.Stub(dao => dao.GetRecordFromDatabase(myRecordId))
                .Return(recordFromDatabase);

        //Act
        var myRecord = _fancyBL.GetImportantDataAndUpdateTheName(myRecordId);

        //Assert
        Assert.AreEqual(myRecordId, myRecord.RecordId);
        Assert.AreEqual("All Your Base Are Belong To Us", myRecord.Name);
    }

Stubs vs Mocks

A stub is simply an alternate implementation. A mock, however, is more than that. A mock sets up an expectation that

  • A specific method will be called
  • It will be called with the provided inputs
  • It will return the provided results

So when you setup a mock, you use the syntax .Expect() instead of .Stub().  Then, in your asserts, you can do .VerifyAllExpectations() on your mock to ensure reality matched your expectations.

In this example, the test will fail with an ExpectationViolationException because the expected call with recordId 101 is never made.


    [Test]
    public void TestExpectations()
    {
        //Arrange
        int myRecordId = 100;
        var recordFromDatabase = new ImportantData
                                     {
                                         Name = "Orignal Name",
                                         RecordId = myRecordId
                                     };

        _mockDAO.Expect(dao => dao.GetRecordFromDatabase(myRecordId))
                .Return(recordFromDatabase);

        _mockDAO.Expect(dao => dao.GetRecordFromDatabase(101))
                .Return(recordFromDatabase);

        //Act
        _fancyBL.GetImportantDataAndUpdateTheName(myRecordId);

        //Assert
        _mockDAO.VerifyAllExpectations();
    }

Update: see my post for more on the VerifyAllExpectations vs AssertWasCalled methods.

Update: I also found a posting that gives a good, simple explanation of Mock vs Stub, including a graphic!

Controlling Mock Behaviors

In the above example, I did _mockDAO.Stub().Return().  This causes the mock object to return the provided value when it's called with the provided inputs.  Sometimes we want to change this behavior, so the following modifiers can be used between the .Stub() and .Return() calls.

Change how many times to use the stub:

Using the .Repeat.Any(), .Repeat.Once(), .Repeat.Times(10) modifiers:

     _mockDAO.Stub(dao => dao.GetRecordFromDatabase(myRecordId))
             .Repeat.Any()
             .Return(recordFromDatabase);

Return the prepared value regardless of the input value:

Using .IgnoreArguments():

  _mockDAO.Stub(dao => dao.GetRecordFromDatabase(myRecordId))
          .IgnoreArguments()
          .Return(recordFromDatabase);

Advanced Argument Constraints:

You can provide very detailed conditions for when to use your return values by defining per-parameter constraints. For example, here I've said the input must be greater than or equal to 0.

       _mockDAO.Stub(dao => dao.GetRecordFromDatabase(Arg<int>.Is.GreaterThanOrEqual(0)))
               .Return(recordFromDatabase);

Here's an example with more than one parameter: (There's a lot more than this – IntelliSense is your friend)


_mockDAO.Stub(dao => dao.GetRecordFromDatabase(
                         Arg<int>.Is.GreaterThanOrEqual(0),
                         Arg<decimal>.Is.NotEqual(2.0),
                         Arg<List<string>>.List.ContainsAll(new List<string> {"foo", "bar"}),
                         Arg<object>.Is.NotNull,
                         Arg<object>.Is.Anything))
         .Return(recordFromDatabase);

Additionally, you can put constraints on properties of objects used as parameters. For instance, if the input parameter had a bool property "IsSomethingICareAbout" and you only wanted to provide a return value when that property is true, you could do this:

            _mockDAO.Stub(x => x.SomeMethod(myObject))
                    .Constraints(Property.Value("IsSomethingICareAbout", true))
                    .Return("foo");

You can put constraints on the input arguments in the same way:


            _mockDAO.Stub(dao => dao.GetRecordFromDatabase(0))
                    .Constraints(Is.GreaterThanOrEqual(0))
                    .Return(recordFromDatabase);

And constraints can be chained with boolean operators:


            _mockDAO.Stub(dao => dao.GetRecordFromDatabase(0))
                    .Constraints(Is.GreaterThanOrEqual(0) && Is.LessThanOrEqual(100) )
                    .Return(recordFromDatabase);

Note: Constraints must be listed in the order of the parameters (ie: the first set of constraints applies to the first parameter, the second set to the second param, and so on).  And a constraint must be provided for each parameter (you can do Is.Anything() as well).

Providing a Method Implementation / Using the Input Params

Instead of using a .Return() with a simple value, you can provide a full implementation of the method using the .Do() method. This also gives you access to the input parameters.  If you want, you can define a delegate and just call the delegate, but I prefer to use lambdas unless the method is really long.

So instead of my previous stub for GetRecordFromDatabase, which pre-configured a return value, I can build the return value on the fly:

    _mockDAO.Stub(dao => dao.GetRecordFromDatabase(0))
            .IgnoreArguments()
            .Repeat.Any()
            .Do((Func<int, ImportantData>)(input => new ImportantData
                                                        {
                                                            Name = "Original Name",
                                                            RecordId = input
                                                        }));

Mocking A Property

If you need to mock a property instead of a method, the syntax is pretty much the same:

    [Test]
    public void TestProperty()
    {
        var mockedInterface = MockRepository.GenerateMock<IHasAProperty>();
        mockedInterface.Expect(x => x.StringProperty).Return("fooString");
        Assert.That(mockedInterface.StringProperty, Is.EqualTo("fooString"));
    }

Throwing an Exception Instead

Instead of .Return(), you can use .Throw() to force an exception:


    [Test, ExpectedException(typeof(NullReferenceException))]
    public void TestNeverCallThisMethod2()
    {
        //Arrange
        _mockDAO.Stub(dao => dao.GetRecordFromDatabase(0))
                .IgnoreArguments()
                .Repeat.Any()
                .Throw(new NullReferenceException());
               
        object inputValue = null;

        //Act
        _fancyBL.NullsNotWelcomeHere(inputValue);

        //Assert
        //nothing to do here -- ExpectedException
    }

Testing Non-Public Members

With Rhino.Mocks, you can't mock private or protected members, but you can mock internal members if you do a little extra work.

Specifically, you must allow your test assembly to access internal members of the assembly under test.  This means adding two InternalsVisibleTo attributes to the AssemblyInfo.cs file of the assembly under test: one for the unit test assembly and one for Rhino.Mocks' DynamicProxyGenAssembly2.  If you're using signed assemblies, you must put the full public key in the attribute.

You can get the public key for an assembly by using the sn -Tp yourAssembly.dll command in the Visual Studio Command Prompt.

For example: (no wrapping -- can't be any spaces in the public key)

[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100c547cac37abd99c8db225ef2f6c8a3602f3b3606cc9891605d02baa56104f4cfc0734aa39b93bf7852f7d9266654753cc297e7d2edfe0bac1cdcf9f717241550e0a7b191195b7667bb4f64bcb8e2121380fd1d9d46ad2d92d2d15605093924cceaf74c4861eff62abf69b9291ed0a340e113be11e6a7d3113e92484cf7045cc7")]
[assembly: InternalsVisibleTo("jwright.Blog.UnitTesting, PublicKey=00……ec")]

Code Used In Examples

Here are the full code files to use in order to run my example tests:

DataAccessObject.cs

using System;

namespace jwright.Blog
{
    public class ImportantData
    {
        public string Name { get; set; }
        public int RecordId { get; set; }
    }

    public interface IDataAccess
    {
        ImportantData GetRecordFromDatabase(int recordId);
        void NeverCallThisMethod();
    }

    public class DataAccessObject : IDataAccess
    {
        public ImportantData GetRecordFromDatabase(int recordId) 
                                          { throw new NotImplementedException(); }
        public void NeverCallThisMethod() { throw new NotImplementedException(); }
    }

}

FancyBusinessLogic.cs

namespace jwright.Blog
{
    internal class FancyBusinessLogic
    {
        private IDataAccess _dataAccessObject;
        internal IDataAccess MyDataAccessObject
        {
            get { return _dataAccessObject ?? (_dataAccessObject = new DataAccessObject()); }
            set { _dataAccessObject = value; }
        }

        public ImportantData GetImportantDataAndUpdateTheName(int recordId)
        {
            var record = MyDataAccessObject.GetRecordFromDatabase(recordId);
            record.Name = "All Your Base Are Belong To Us";
            return record;
        }

        public void NullsNotWelcomeHere(object input)
        {
            if (input == null) { MyDataAccessObject.NeverCallThisMethod(); }
        }

    }
}

FancyBusinessLogicTest.cs

using System;
using NUnit.Framework;
using Rhino.Mocks;

namespace jwright.Blog
{
    [TestFixture]
    public class FancyBusinessLogicTest
    {
        private IDataAccess _mockDAO;
        private FancyBusinessLogic _fancyBL;

        [SetUp]
        public void SetUp()
        {
            //reset all my objects under test and mocks
            _mockDAO = MockRepository.GenerateMock<IDataAccess>();
            _fancyBL = new FancyBusinessLogic {MyDataAccessObject = _mockDAO};
        }

        //-----
        // Tests will go here
        //-----
    }
}


I haven't been posting new blogs for a while after getting fed up with how difficult it is to get nicely formatted code snippets into Blogger. Using the site's rich text editor strips out much of the whitespace (tabs and carriage returns, in particular) making the code unreadable. I really just wanted to copy&paste my code from Visual Studio and keep the format, the font colors, etc.

I started the process of setting up my own WordPress instance (using MS WebMatrix), but was concerned about losing the links to my old posts, etc.

Then it occurred to me that Blogger has an email-to-post feature - and it supports HTML emails. So, I just need to craft my posts as rich text emails and submit them that way. Much easier!

Now if only I could draft HTML emails from my iPhone. At least I can edit them in another app and copy&paste into the mail client.

So, expect some blogs in the next week. I'm working on some unit testing related posts, beginning with a Rhino.Mocks overview (mainly for some of the folks at my new job).



A while back, one of my coworkers sent out a late-night plea for help with a unicode issue he was having. I thought I'd post the conversation here (slightly censored) for future reference:

Original email from coworker:

I’m looking for a little .NET help – particularly with the MailMessage class.    I’m pulling the contents of an HTML page which is in French (and displaying properly in the webpage) and sending it via an email.   I’m having difficulty getting the mail to use the correct encoding to show all the special characters correctly .   If anyone has any experience doing this –please let me know.

My reply:

Make sure you're setting the .BodyEncoding property to Unicode. If you're getting "?" chars where special chars should be, it may actually be a problem with the way you're importing the content (the HTML could be getting munged before going into the MailMessage body). Take a look here: http://bytes.com/topic/asp-net/answers/345431-sending-mail-message-unicode-text

Coworker's initial reply:

Thanks – I’m going to take a crack at this tomorrow.   I’ve tried all the different encodings on the .BodyEncoding property but no luck.   So I think you are right – it might be the way I am pulling the HTML.       What a freaking pain.

And his follow-up:

You were right.   That worked.  I owe you a beer – remind me to buy it for you next happy hour.

That reminds me -- I need to collect on that beer.

Remember that if you don't handle Unicode correctly in every place you manipulate the bytes (from initial read to final write), you run the chance of munging the bits because some string or character library assumes 8-bit chars or the wrong encoding.  The key here was to ensure the initial read was reading it as Unicode -- something like this:

var myReader = new StreamReader(fileName, System.Text.Encoding.Unicode);
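
And on the MailMessage side, a minimal sketch (addresses and body content are placeholders) of setting the encoding explicitly:

var message = new System.Net.Mail.MailMessage("from@example.com", "to@example.com")
                  {
                      Subject = "Contenu en français",
                      Body = htmlContent,  // the HTML that was read in as Unicode above
                      IsBodyHtml = true,
                      BodyEncoding = System.Text.Encoding.UTF8,
                      SubjectEncoding = System.Text.Encoding.UTF8
                  };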


During a client project, we had a set of unit tests that connected to a webservice hosted on a client server. The service used Windows Authentication and would reject requests coming from our local dev environments (since we, as consultants, had laptops that weren't on the client's AD domain).  This resulted in false-negative tests, and in my experience, once you start having "known bad" tests that always fail, people start to ignore failing unit tests entirely -- which is not what you want.

So, to keep from having false-negative tests, I modified that particular suite of tests to auto-ignore the tests if the current environment is not on the right AD domain.  This would keep them from showing as failures on our local dev environments, but they would fully run on our build server (vs. marking them as permanently ignored, where they wouldn't run on the CI server).  Here's what I did:

        /// <summary>
        /// Will dynamically mark a test as Ignored if the system running the test is not
        /// on the SOMECLIENT domain. This is because the API will fail authentication
        /// otherwise and will provide false negative results 
        /// </summary>
        public static void IgnoreTestIfOffDomain()
        {
            try
            {
                var domain = System.DirectoryServices.ActiveDirectory.Domain.GetCurrentDomain();
                var domainName = domain.Name.ToUpper();

                if (!domainName.Contains("SOMEDOMAINNAME"))
                {
                    Assert.Ignore("NOT RUNNING API TESTS from current domain " + domain.Name);
                }
            }
            catch (System.DirectoryServices.ActiveDirectory.ActiveDirectoryOperationException)
            {
                Assert.Ignore("NOT RUNNING API TESTS from current domain (domain missing)");
            }
        }

Note that an ActiveDirectoryOperationException will be thrown if the system is not currently registered on a domain, thus the try/catch.

I put this into a method which I call in the [TestFixtureSetUp] or in individual test methods, depending on whether I want to ignore the whole suite or just individual tests.
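
For example, a minimal sketch of suite-level usage (the fixture name is hypothetical):

        [TestFixture]
        public class ClientApiTests
        {
            [TestFixtureSetUp]
            public void FixtureSetUp()
            {
                // Marks every test in this fixture as Ignored when off-domain
                IgnoreTestIfOffDomain();
            }
        }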



When writing ASP.Net applications, you often want or need to cache data in the UI layer for reuse. Often this is used to improve performance (limit repeated database calls, for example) or store process state. Below is an overview of various ways to achieve this for various scenarios.

Executive Summary:

(ordered from shortest to longest typical duration)

ViewState
Scope: current page/control and user (each page/control has its own ViewState). Lifespan: across post-back. Use when you need to store a value for the current page request and have it retrieved on the next post-back.

Base Class
Scope: current page/user and its child controls. Lifespan: current instance. Use when you need to store a value once per page request (such as values from a database) and have access to it during the current page request only.

HttpContext
Scope: current page/user and its child controls. Lifespan: current instance. Use when you need to store a value once per page request (such as values from a database) and have access to it during the current page request only.

ASP Session
Scope: site-wide, current user, all pages/controls. Lifespan: duration of the user's session. Use when you need to store a value once per user's visit to the site (such as user profile data) and have access to it from any code for the duration of the user's visit.

ASP Cache
Scope: site-wide, all pages/controls, all users. Lifespan: until it expires or the server restarts. Use when you need to store data for access from any code for all users (such as frequently used, but rarely changed, database values -- like a list of countries for an address form).

Cookies
Scope: site-wide, current user, all pages/controls. Lifespan: until they expire or the browser deletes them. Use when you need to store small data (such as a user's unique ID) from one visit to the next, or possibly across sites. Not for sensitive data!

ViewState:

If you’re an ASP.Net developer, you should have a firm grasp of ViewState and all its benefits and drawbacks. Basically, ViewState allows you to store data in a special hidden input field which is provided back to you when the user posts back the page. This is similar to just using a hidden field, except that it is page-/control-specific (meaning, if you have a user control that is repeated on the page, each instance of the control can store its own ViewState data with the same key and get back its individual results). It also includes some basic security and compression.

// Set ViewState value while rendering the page
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Set ViewState value
    this.ViewState.Add("MyComputedValue", (IList)BLL.DoSomeComputationThatShouldOnlyRunOnce());
}

// After post-back, retrieve the value
protected void Page_Load(object sender, EventArgs e)
{
    IList myValue = (IList)this.ViewState["MyComputedValue"];
    if (myValue == null)
    {
        myValue = BLL.DoSomeComputationThatShouldOnlyRunOnce();
    }
}

I would discourage the use of ViewState for storing anything more than very small pieces of data, since this information is included in the rendered HTML and has to be downloaded/uploaded with each request, degrading performance. You can configure ViewState to use Session for its storage and eliminate the need to include it in the page's HTML, but if you're going that route, why not just use Session directly as your caching location (see below)?
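
If you do want to go the Session-backed ViewState route, here's a minimal sketch using the built-in SessionPageStatePersister, returned from the page's PageStatePersister property:

protected override PageStatePersister PageStatePersister
{
    // Store ViewState server-side in Session instead of in the page's HTML
    get { return new SessionPageStatePersister(this); }
}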

BasePage (shared base class):

A common pattern I’ve used on just about every project, and one I highly suggest for many reasons, is to have a "BasePage" class which inherits System.Web.UI.Page, then have all of my application pages inherit from BasePage. This allows the developers to create shared "shortcuts" in one location which are accessible from all of our UI layer code.

Among other useful shortcuts (like storing singletons, etc), you can create properties on your BasePage class for storing cached data during the current page invocation. For instance, if you’re using the ASP.Net membership providers, you can store the current authenticated user in your BasePage so that you’re not going to the database every time you call Membership.GetUser().

Note too, that this pattern can be combined with the other patterns listed, such as having a property that reads/writes data from Session, Cookies, etc., allowing for reduced code duplication.

using System.Web;
using System.Web.Security;
 
namespace MyProject.WEB
{
    public abstract class MyBasePage : System.Web.UI.Page
    {
    
        /// <summary>
        /// Cached reference to Membership.GetUser() (the currently authenticated user, or null if not auth'd).
        /// From Membership.GetUser():
        /// Gets the information from the data source and updates the last-activity date/time stamp for the current logged-on membership user.
        /// </summary>
        /// <value>A System.Web.Security.MembershipUser object representing the current logged-on user.</value>
        internal MembershipUser AuthenticatedUser
        {
            get
            {
                if (_authedUser == null)
                {
                    _authedUser = Membership.GetUser();
                }
                return _authedUser;
            }
        }
        private MembershipUser _authedUser;
    }
}

To follow this further, you can create a ControlBase class for your user controls which has a typed reference to the BasePage:

namespace MyProject.WEB
{
    public abstract class MyControlBase : System.Web.UI.UserControl
    {
        protected MyBasePage BasePage
        {
            get { return (MyBasePage)Page; }
        }
        }
    }
}

Now, from within your control, you can do this.BasePage.AuthenticatedUser.UserName to get the currently logged-in username without having to go to the database more than once per page rendering.

HttpContext:

You can use the HttpContext Items collection to store values for the duration of the current page rendering (similar to the BasePage pattern above). Personally, I prefer the BasePage pattern, but there are some cases where it isn’t possible, such as when you’re working within a CMS framework like SiteCore and don’t actually have access to the page (SiteCore only allows you to create user controls and place them via their CMS framework).

/// <summary>
/// Cached reference to Membership.GetUser() (the currently authenticated user, or null if not auth'd).
/// From Membership.GetUser():
/// Gets the information from the data source and updates the last-activity date/time stamp for the current logged-on membership user.
/// </summary>
/// <value>A System.Web.Security.MembershipUser object representing the current logged-on user.</value>
internal MembershipUser AuthenticatedUser
{
    get
    {
        if (HttpContext.Current.Items["CurrentUser"] == null)
        {
            HttpContext.Current.Items["CurrentUser"] = Membership.GetUser();
        }
        return (MembershipUser)HttpContext.Current.Items["CurrentUser"];
    }
}
 

ASP.Net Session:

Using ASP.Net Session will provide a way to store data across page views for the duration of the user’s visit to the site. Be careful -- I’ve seen many people get tangled up with stale session data, particularly on initial page loads. For instance, a user clicks a button which opens an Add/Edit popup in edit mode for a product.  The developer stores the product info in Session, then opens the popup control, which checks Session for a product and goes into edit mode if product data exists.  The user changes their mind and closes the popup (but the developer forgets to clear the product from Session in this case).  Then the user clicks the "Add new product" button, showing the same control, which should be in "add" mode -- but since there is a stale product in Session, it enters edit mode for the previous product instead.  Make sure that if a user is returning to a page after previously storing page state in Session, you correctly handle the potentially stale data (see the sketch after the snippet below).

public SortDirection LastSortDirection
{
    get
    {
        //Note: SortDirection is an enum (a value type), so casting a missing
        //(null) Session entry would throw -- check for null and pick a default
        object sortDir = HttpContext.Current.Session["SortDir"];
        return sortDir == null ? SortDirection.Ascending : (SortDirection)sortDir;
    }
    set
    {
        HttpContext.Current.Session["SortDir"] = value;
    }
}
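
And to the staleness point above, a minimal sketch (the "EditProduct" key and ClosePopup() helper are hypothetical) of clearing the state when the user backs out:

protected void CancelButton_Click(object sender, EventArgs e)
{
    // Remove the stale product so the next "Add" doesn't open in edit mode
    HttpContext.Current.Session.Remove("EditProduct");
    ClosePopup();
}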

ASP.Net Cache:

The ASP.Net Cache can be used to store objects for a predetermined amount of time across all page requests (ie: at the server level). This is useful for data read from the database that isn’t often changed, such as a list of options for a drop-down list.

internal List<String> DropDownListOptions
{
    get
    {
        if (HttpRuntime.Cache["DropDownListOptions"] == null)
        {
            HttpRuntime.Cache.Insert("DropDownListOptions", DAL.GetListFromDatabase(), null,
                                     DateTime.Now.AddHours(24), System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return (List<String>)HttpRuntime.Cache["DropDownListOptions"];
    }
}

Cookies:

Cookies can be used to save data on the client side and have it returned to you on postback. Note, however, that unlike the other storage mechanisms, cookies have two different storage locations: one for the inbound value and one for the outbound value. So you can’t (at least, not without some additional logic) write a value, then read it again for use later in your page logic (your "read" will just re-get the original inbound value, not the updated one). Generally, I would suggest reading the value at page load, storing it in a property on your page class, then writing it out again in your PreRender code (see the sketch after the snippet below).

Also note that not setting a cookie value on your response is not the same as deleting the cookie. The browser will keep the last cookie received until it expires or is explicitly overwritten.

Warning: Cookies are stored on the user’s machine, so don’t store sensitive data there and always validate the values you get back (it’s easy to tamper with them). Encryption is suggested, as is setting the .Secure property to restrict transport to HTTPS.

private const string COOKIE_NAME = "MyCookie";
 
/// <summary>
/// Update the cookie, with expiration time a given amount of time from now.
/// </summary>
public void UpdateCookie(List<KeyValuePair<string, string>> cookieItems, TimeSpan? cookieLife)
{
    HttpCookie cookie = Request.Cookies[COOKIE_NAME] ?? new HttpCookie(COOKIE_NAME);
    
    foreach (KeyValuePair<string, string> cookieItem in cookieItems)
    {
        cookie.Values[cookieItem.Key] = cookieItem.Value;
    }
    
    if (cookieLife.HasValue)
    {
        cookie.Expires = DateTime.Now.Add(cookieLife.Value);
    }
    Response.Cookies.Set(cookie);
 
}
 
public string ReadCookie(string key)
{
    string value = string.Empty;
    
    if (Request.Cookies[COOKIE_NAME] != null)
    {
        value = Request.Cookies[COOKIE_NAME].Values[key];
        //UpdateCookie(cookieName, value); //optional: update the expiration so it rolls outward
    }
    
    return value;
}
 
public void DeleteCookie()
{
    var cookie = new HttpCookie(COOKIE_NAME)
    {
        Value = string.Empty,
        Expires = DateTime.Now.AddDays(-1)
    };
    Response.Cookies.Set(cookie);
}
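
Tying it together, here's a minimal sketch of the read-at-load/write-at-PreRender pattern suggested above, using the helpers from the snippet (the "Theme" key is hypothetical, and this assumes the helpers live on the same page class):

private string _theme;

protected void Page_Load(object sender, EventArgs e)
{
    // Inbound: the value the browser sent with this request
    _theme = ReadCookie("Theme");
}

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    // Outbound: write the (possibly updated) value onto the response
    UpdateCookie(new List<KeyValuePair<string, string>> { new KeyValuePair<string, string>("Theme", _theme) },
                 TimeSpan.FromDays(30));
}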