Jan 20, 2016

OpenStack Swift client for .NET Core


After evaluating several object stores, we came to the conclusion that OpenStack Swift was the one closest to our needs. Swift is described as "a highly available, distributed, eventually consistent object/blob store".
Having made the choice, we now needed to find a suitable client for interacting with our cluster programmatically.
It needed to meet the following criteria:
 - Open-source
 - Written in .NET
 - Recent community activity
 - Support for .NET core should at least be on the roadmap

Having found none, we decided to write our own. We're introducing SwiftClient, an open-source .NET client for OpenStack Swift that covers most of the Swift API, handles authentication and supports large object streaming. Besides the Swift .NET client, the project contains an ASP.NET 5 demo and a cross-platform console application suitable for bulk uploading an entire directory tree and for large object operations. Our build process runs on Ubuntu and Windows, and testing is done against a Swift Docker container before each publish to NuGet.

Any contributions are welcome; feel free to open an issue on our GitHub repository to get in touch with us.

Nov 10, 2014

Test ‘til you drop


Testing may seem ugly to you, yet somebody has to do it; otherwise that lovely site you spent months developing will look lovely only to a fraction of your actual customers. And yes, it all comes down to the details.

Besides functionality, here's what you must pay attention to when you test your product:

  1. Speed – is your site loading fast enough, for both your local and your international users?
    Why do you have to test it? Because there's a big gap between those two categories. A gap of seconds. And it becomes even more obvious when you're on mobile, outside a Wi-Fi connection.
    So how can you test that? Try the website speed test from Pingdom and use the advanced settings to change the server location. The tool also provides information on how you can optimize your website for greater speed. On mobile, just put your cellphone on 3G (yes, not 4G) and see what happens. Are you satisfied?
  2. Browsers – yes, good old browser testing. It may take time to test everything but it’s a must.
    You should look for both functionality and design problems. And don't think that IE is your only adversary: if you work on a Mac you should test Windows browsers and vice versa. If you don't have the devices, then welcome to virtual machines, because emulating IE 8 or 9 from IE 11 isn't accurate at all, so you'll actually need to install that browser version on a VM. Or you can always try BrowserStack, which works with localhost too.
  3. Adblock – if you hope that everyone will see your beautiful banners and commercials you are mistaken. Welcome the most beloved browser plugin – Adblock Plus.
    Never tried it on your own website? I'm sure your clients have. But why is this so important? Just have a look at the image below. While it doesn't have an advertising area at the top, the thing that I'm not seeing is actually their own nicely crafted banner. And not only am I not seeing it, it breaks the page as well. And by the way, Adblock works on some mobile devices too.
  4. Mobile – not testing on mobile isn't an option anymore. But what can you do when emulators aren't good enough yet and your clients are surely using devices you don't have?
    I recommend you buy at least one device for iOS, Android and Windows Phone; it doesn't need to be brand new if you can't afford it, just head to your analytics to see which device is most commonly used among your customers. Another option is to kindly ask your coworkers or your friends to lend you their phones from time to time. This applies not only to native apps but also to responsive websites. And keep in mind that the browser of choice on mobile devices isn't always the default one for those operating systems.
  5. Color blind – almost 10% of the population has some sort of color blindness. That means that the color palette you put together to highlight different parts of your website may not be seen very well. It may be hard to understand if you don't see it for yourself. Here's a simulator where you can upload a screen capture of your app and see what color blindness really means. You can change your app for the better by using contrast, not only color, for those important parts.

There are plenty more tests that you can do for your app or website, but these five cover the basics of a good product: it should look good, work well and load fast.

Presentation image from Freepik

Oct 20, 2014

Automated backup for PostgreSQL cluster on Windows with PowerShell


The PostgreSQL base backup tool (pg_basebackup) was introduced with v9.1 and is primarily used for creating a ready-to-go standby replica. Since v9.3, pg_basebackup supports WAL segment streaming via the -X stream option. With -X stream on, pg_basebackup will open a second connection to the server and stream the transaction log in parallel with the cluster files. The resulting backup contains all the data needed to start a fresh replica or to restore the main cluster to its original state. Version 9.4, currently in beta, will further improve pg_basebackup by allowing you to relocate not only the data store but also any tablespaces you might have.
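
For a quick illustration, a plain-format base backup of a local cluster with streamed WAL comes down to a single command; the destination folder and the backup user below are just placeholders:

# full cluster copy plus streamed WAL; adjust paths and credentials
& 'C:\Program Files\PostgreSQL\9.3\bin\pg_basebackup.exe' `
    -h localhost -U pgbackup `
    -D 'C:\Database\Backup\base' `
    -X stream -P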

If you are running Postgres on Windows and you are looking for an automated way of doing full cluster backups then Backup-Postgres PowerShell script is a good starting point.

The Backup-Postgres script does the following:
  • checks if there is enough free space to make a new backup based on the last backup size (works only with a local backup path)
  • purges expired backups based on the supplied expire date
  • creates a new folder for each backup inside the root backup directory (the root path can be defined as local or network share)
  • calls the pg_basebackup tool to begin a tar gzip backup of every tablespace, along with all required WAL segments (via the "--xlog" option)
  • writes any encountered errors to Windows Event Log
  • writes backup elapsed time to Windows Event Log
The script can be downloaded from TechNet Gallery or from GitHub Gist.
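
To give an idea of what the purge step boils down to, here is a simplified sketch (not the published script), assuming each backup lives in its own folder under the backup root:

# simplified purge sketch: remove backup folders older than the expire date
$BackupRoot = 'C:\Database\Backup';
$ExpireDate = (Get-Date).AddDays(-7);

Get-ChildItem -Path $BackupRoot -Directory |
    Where-Object { $_.CreationTime -lt $ExpireDate } |
    Remove-Item -Recurse -Force;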

Configure PostgreSQL server

I'm assuming you have PostgreSQL version 9.3.x or newer installed on Windows Server 2012 or newer.

In order to make a base backup of the entire cluster using pg_basebackup tool you'll have to configure your server for streaming replication. PostgreSQL base backup uses the replication protocol to make a binary copy of the database cluster files and WAL segments without interfering with other connected clients. This kind of backup enables point-in-time recovery and can be used as a starting point for a streaming replication standby server.

Enable streaming replication:

Open postgresql.conf, located (by default, on a 64-bit install) in C:\Program Files\PostgreSQL\9.3\data\, and make the following changes:

wal_level = hot_standby
max_wal_senders = 3  
wal_keep_segments = 10
You should adjust wal_keep_segments based on the amount of changes your server receives while a backup is in progress. By default, each WAL segment is 16 MB; if you expect to have more than 160 MB of changes in the time it takes to make the backup, then increase it.

Create a dedicated backup user in Postgres:

Open psql, located in C:\Program Files\PostgreSQL\9.3\bin\, log in as postgres and run the following command:

CREATE USER pgbackup REPLICATION LOGIN ENCRYPTED PASSWORD 'pgbackup-pass';
Allow streaming replication connections from pgbackup on localhost:

Open pg_hba.conf, located in C:\Program Files\PostgreSQL\9.3\data\, and add the following line:

host    replication    pgbackup    ::1/128    md5

Configure Windows server

Create a local Windows user named postgres. It doesn't need to have administrator rights, but it should have full access to the backup folder.
Log off from Windows and log in as postgres, navigate to C:\Users\postgres\AppData\Roaming\ and create a folder named postgresql. Inside postgresql, create a file named pgpass.conf with the following content:

localhost:5432:*:pgbackup:pgbackup-pass
The pg_basebackup tool will look for this file to fetch the password.
Open Backup-Postgres.ps1 and modify the following variables to match your configuration:

# path settings
$BackupRoot = 'C:\Database\Backup';
$BackupLabel = (Get-Date -Format 'yyyy-MM-dd_HHmmss');

# pg_basebackup settings
$PgBackupExe = 'C:\Program Files\PostgreSQL\9.3\bin\pg_basebackup.exe';
$PgUser = 'pgbackup';

# purge settings
$ExpireDate = (Get-Date).AddDays(-7);
Now it's time to schedule the backup: open Windows Task Scheduler and create a new task. Set up the task to run whether the user is logged on or not, with highest privileges, and use the postgres user for it. Add a recurring trigger; I've set mine to repeat every day indefinitely. You should carefully choose the best time to start the backup, which is when the server is least used. In the Settings tab, enable the rule Do not start a new instance if the task is already running; this prevents multiple backups from running in parallel.
Go to the Actions tab and add a new action:

powershell -ExecutionPolicy Bypass -File "C:\Jobs\Backup-Postgres.ps1"
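
If you prefer to create the scheduled task from PowerShell instead of the Task Scheduler UI, something along these lines should work on Server 2012 and newer (adjust the script path, start time and credentials to your setup):

# create the daily backup task (ScheduledTasks module, Server 2012+)
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File "C:\Jobs\Backup-Postgres.ps1"';
$trigger = New-ScheduledTaskTrigger -Daily -At '03:00';

Register-ScheduledTask -TaskName 'Backup-Postgres' -Action $action -Trigger $trigger `
    -User 'postgres' -Password 'postgres-windows-password' -RunLevel Highest;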

Restore cluster from base backup

In order to restore a base backup with multiple tablespaces, you'll have to extract each tablespace archive to its original path. Since Windows doesn't have native support for tar.gz, you can use the 7-Zip command line.
With 7-Zip you can extract a tar.gz archive without storing the intermediate tar file; 7-Zip can write to stdout and read from stdin using the following command:

7z x "base.tar.gz" -so | 7z x -aoa -si -ttar -o "C:\Program Files\PostgreSQL\9.3\data"
Restore steps:

1) Stop Postgres server
2) Delete the contents of the data folder and of every tablespace folder (if you have enough free space, make a backup copy of the current data and tablespaces first)
3) Run the 7zip command and extract each archive to its corresponding folder
4) Create a recovery.conf file in the data folder with the following content, specifying the postgres password:

standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=postgres password=PG-PASS'
5) Open the pg_hba.conf file and comment out all existing rules; this will prevent external clients from accessing the server while in recovery.
6) Start the Postgres server. When Postgres starts, it will process all WAL files, and once recovery is finished the recovery.conf file gets renamed to recovery.done.
7) Restore pg_hba.conf to its original state and restart Postgres.

After getting used to the restore process you could automate it with PowerShell.
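
A rough outline of such a script, assuming a single tablespace, 7-Zip available on the PATH and the default EnterpriseDB service name (the pg_hba.conf steps are left out), could look like this:

# rough restore outline; adjust paths and service name to your setup
$DataPath   = 'C:\Program Files\PostgreSQL\9.3\data';
$BackupPath = 'C:\Database\Backup\2014-10-20_030000';

Stop-Service 'postgresql-x64-9.3';

# wipe the old cluster files and extract the base backup in place
Remove-Item "$DataPath\*" -Recurse -Force;
cmd /c "7z x `"$BackupPath\base.tar.gz`" -so | 7z x -aoa -si -ttar -o`"$DataPath`"";

# minimal recovery.conf; Postgres renames it to recovery.done once recovery is finished
Set-Content "$DataPath\recovery.conf" @"
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=postgres password=PG-PASS'
"@;

Start-Service 'postgresql-x64-9.3';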

Oct 16, 2014

PostgreSQL unattended install on Windows Server with PowerShell


I often find myself in the situation where I need to install and configure PostgreSQL on a new VM running Windows. Because repetitive tasks are annoying and error prone, I've decided to automate this process as much as I can using PowerShell.

The Install-PostgreSQL PowerShell module does the following:
  • creates a local Windows user that PostgreSQL will use (called postgres by default)
  • the password used for the creation of this account will be the same as the one used for PostgreSQL's postgres superuser account
  • creates postgres user profile
  • downloads the PostgreSQL installer provided by EnterpriseDB
  • installs PostgreSQL unattended using the supplied parameters
  • sets the postgres windows user as the owner of any PostgreSQL files and folders
  • sets PostgreSQL windows service to run under the postgres local user
  • creates the pgpass.conf file in AppData
  • copies configuration files to data directory
  • opens the supplied port that PostgreSQL will use in the Windows Firewall
The script can be downloaded from TechNet Gallery or from GitHub Gist.
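
To give a flavor of the individual steps, opening the firewall port comes down to something like this (a simplified sketch, not the module's exact code):

# open the PostgreSQL port in Windows Firewall (Server 2012+ cmdlet)
New-NetFirewallRule -DisplayName 'PostgreSQL' -Direction Inbound `
    -Protocol TCP -LocalPort 5432 -Action Allow;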

Usage

On the machine where you want to install PostgreSQL, download the Install-Postgres.zip file and extract it to the PowerShell Modules directory, usually located under Documents\WindowsPowerShell.
Open PowerShell as Administrator and run Import-Module Install-Postgres. Before running the unattended install you should customize the PostgreSQL configuration files located in Install-Postgres\Config directory.
You can also add a recovery.conf file if you plan to use this PostgreSQL cluster as a standby slave. All conf files located in Install-Postgres\Config will be copied to the PostgreSQL data directory once the server is installed.

Install PostgreSQL with defaults:
Import-Module Install-Postgres
Install-Postgres -User "postgres" -Password "ChangeMe!"
Install PostgreSQL full example:
Install-Postgres `
    -User "postgres" `
    -Password "ChangeMe!" `
    -InstallUrl "http://get.enterprisedb.com/postgresql/postgresql-9.3.5-1-windows-x64.exe" `
    -InstallPath "C:\Program Files\PostgreSQL\9.3" `
    -DataPath "C:\Program Files\PostgreSQL\9.3\data" `
    -Locale "Romanian, Romania" `
    -Port 5432 `
    -ServiceName "postgresql"

Oct 6, 2014

UX practices for newly born websites


You just launched your website and now you're eager to see if users actually like what you've done, but hold your horses and don't jump to conclusions too fast. Let the users enjoy what you have created before throwing a big modal screen with a survey at them. You'll lose them twice as fast as you gained them.

I'm not saying that you shouldn't care, but you should make smart choices when it comes to user feedback in the first month or so of your product.

Here's what you can do:

  1. Analytics – the good old friend, it should be there from the beginning. Watch for everything: audience, technology, bounce rate, referrals, direct search (for this I would suggest using webmaster tools as well), I mean everything. 
  2. Video recording from site – the next best thing; you can record everything a user does on your site: what they click, how they scroll and whether they have difficulties finding stuff on the page or completing a form. It's easier to understand user behavior from this than from a heat map tool. I'm using Clicktale for this job; you can try it for free or you can find other similar products.
  3. A/B testing – this is the ideal time, right at the beginning, before users get comfortable with the way your product looks. Throw some color on those buttons baby, see what pops! 
  4. Social media – even if you don't have a Twitter/Facebook page (really, why don't you?), people are still going to talk about you, so make sure you search for your brand's name across multiple social media platforms.

It's been a month or more? OK, take your surveys (don't forget to add some incentive; it's not the end of the world to give them some discounts), call people, interview clients and so on. Never be too aggressive; aggressive is obsolete!

Sep 23, 2014

jQuery Unobtrusive client side form validation for ASP.NET MVC



Here at VeriTech we are using a modified version of Microsoft's jQuery Unobtrusive Validation, a support library for jQuery and the jQuery Validate plugin which has shipped with ASP.NET MVC since its third release. It enhances client-side validation by using unobtrusive data-* validation attributes instead of generated code, as in previous versions. Thus, it is easily extendable and doesn't pollute your code.

We also maintain the BForms open source framework, making it the perfect platform to implement and publish the improvements that we added to the way in which validation works. Here I am going to list some of those changes and also explain the reasons behind them.
Even if you use client-side validation, you must validate your data on the server as well by checking the ModelState.IsValid property. You should also call the ValidationMessageFor HTML helper, which is responsible for displaying the validation error messages from your ModelState. Sadly, this implies that you are going to redraw the form, adding unnecessary overhead. That's why we made some small improvements to the flow of validating a form. First of all, we are using a modified version of the showErrors method from the jQuery validation plugin. We added a second boolean parameter which denotes whether the viewport should scroll to the first form error, improving the user experience.
Another thing that we added was an extension method for the ModelStateDictionary class, which gathers all the model errors in an easy-to-use way; the controller helper below uses it to return them as JSON.
public virtual BsJsonResult GetJsonErrorResult()
{
    return new BsJsonResult(
        new Dictionary<string, object> { { "Errors", this.ModelState.GetErrors() } },
        BsResponseStatus.ValidationError);
}
By using all the methods above and an overload of jQuery's ajax method, which adds some new callbacks, including one for server validation errors, we managed to give users a seamless experience, without needing to replace the whole form to render the errors.
$.bforms.ajax({
    url: ajaxUrl,
    data: submitData,
    validationError: function (response) {
        var validator = $('.validated-form').validate();
        validator.showErrors(response.Errors, true);
    }
});
On that note, one of the most frequently asked questions I've heard on the subject of unobtrusive validation is how you are supposed to handle new HTML added to your page. I found that the best way is to remove the data values added to the form, named "validator" and "unobtrusiveValidation", and to call the jQuery.validator.unobtrusive.parse method on the new content.
var $form = $('.validated-form');

$form.removeData('validator');
$form.removeData('unobtrusiveValidation');

$.validator.unobtrusive.parse($form);
Another thing that you are probably going to need at some point is custom validation rules. There are two steps that you must complete to be able to do that. First, you have to create a new validation attribute, which inherits from the ValidationAttribute class and implements the IClientValidatable interface. After that you should call the jQuery.validator.unobtrusive.adapters.add method, which has the following parameters:
  • adapterName: The name of the adapter to be added. This matches the name used in the data-val-nnnn HTML attribute (where nnnn is the adapter name)
  • params: An array of parameter names (strings) that will be extracted from the data-val-nnnn-mmmm HTML attributes (where nnnn is the adapter name, and mmmm is the parameter name)
  • fn: The function to call, which adapts the values from the HTML attributes into jQuery Validate rules and/or messages.
Don't forget to parse your form again to be sure that the new validation rules are added.
As you can see, you can manually add the data-val-nnnn HTML attributes to your form elements without using a validation attribute, but then you are not going to be able to use the server-side validation methods.

This is how you can implement a custom rule for mandatory checkboxes (for example, a Terms and Conditions agreement):
//server side code using validation attributes
public class BsMandatoryAttribute : ValidationAttribute, IClientValidatable
{
    /// <summary>
    /// Returns true if the object is of type bool and it's set to true
    /// </summary>
    public override bool IsValid(object value)
    {
        //handle only bool? and bool

        if (value.GetType() == typeof(Nullable<bool>))
        {
            return ((bool?)value).Value;
        }

        if (value.GetType() == typeof(bool))
        {
            return (bool)value;
        }

        throw new ArgumentException("The object must be of type bool or nullable bool", "value");
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata,
        ControllerContext context)
    {
        yield return new ModelClientValidationRule
        {
            ErrorMessage = this.ErrorMessage,
            ValidationType = "mandatory"
        };
    }
}
//client side validation methods
jQuery.validator.addMethod('mandatory', function (value, elem) {
    var $elem = $(elem);
    if ($elem.prop('type') == 'checkbox') {
        if (!$elem.prop('checked')) {
            return false;
        }
    }
    return true;
});

jQuery.validator.unobtrusive.adapters.addBool('mandatory');

Another improvement that we added to the validator is the way in which validation errors are displayed. By default a label is added after your input, containing the raw error message. We replaced the label with a span containing a Bootstrap Warning Glyphicon. We also used the Bootstrap Tooltip plugin, so that the validation message is shown when you hover over the exclamation mark.

For a live demo of a form validated using our modified version of Microsoft's Unobtrusive validation, you can check the BForms Login Demo Page.

Sep 5, 2014

Architecting an automatic updates system for windows services



We are working on a control and monitoring system for on-premises and cloud-hosted servers and .NET applications. The monitoring is done by installing a windows service on all the virtual machines that are part of the system. The windows service's job is to run (and keep open) a console app that collects data from the local machine and sends it to the monitor web service. After deploying the windows service to a dozen machines, we realized that updating the collector component involves a lot of time and effort. So we've decided that we need an automated process that will detect when a new version of the collector component has been committed to Git, package the new version and apply it on all the servers.

Our automatic updates system is composed of a TeamCity server and a WebAPI service that collects the data, hosts the update packages and acts as a proxy between the TeamCity server and our windows services. We don't want direct communication between TeamCity and the windows services, since it would scale poorly (we can have multiple WebAPI services to distribute packages when we need to scale) and direct communication between the windows service host and TeamCity might not be wanted or possible.

TeamCity will detect changes in the Git repository of the collector component, pull the changes locally, build the collector project, increment the assembly version if the automated tests have passed, use MSBuild to rebuild the project and archive the artifacts in a zip package. The package, containing the current version number in its name, gets deployed to the WebAPI service, which stores it in its local storage.

Getting the update package from TeamCity to the update service can be done in several ways. We can configure TeamCity to deploy the WebAPI service along with the ZIP package via IIS Web deploy, but that would mean that every time a new update shows up the WebAPI service will have to be re-deployed and restarted. Another way to achieve this is to make the WebAPI service check the TeamCity API when the windows service asks it for updates. This is probably the best approach (taking development effort into account), and the downsides can be overcome by making the WebAPI service cache the results for a certain amount of time and only download a new update on the first request. Another approach would be to expose the updates folder as a network drive and XCopy the packages to it, but this requires setting up the TeamCity account to have write access to that folder.

There are several viable options for distributing the update package from the WebAPI service to the various windows services. The WebAPI can either push notifications to the windows service (using SignalR or Redis pub/sub), or the windows services can check for updates with the WebAPI service at some set interval. We've chosen to have the windows service poll the WebAPI for updates since it's easier to implement and it's not critical for us to update the collector component exactly at the moment a new version is available. If, however, we need to make sure the collector is always up to date, the WebAPI service (which not only acts as an update agent, but also collects all the data sent from the collector) can check the collector's version on each call it receives, and if a new one is present, tell the collector to signal the windows service that an update is needed.
The windows service, once it downloads an update, waits for the collector console to finish the running job and shuts it down, unzips the package and overwrites all the files it contains, then starts the new version of the collector.
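
Sketched in PowerShell for brevity (the real logic lives in the .NET windows service, and the endpoint URLs and paths below are made up), the poll-and-apply cycle looks roughly like this:

# hypothetical update check against the WebAPI proxy
$installPath = 'C:\Services\Collector';
$current     = (Get-Item "$installPath\Collector.exe").VersionInfo.FileVersion;
# assumes the endpoint returns a plain version string like "1.2.3.0"
$latest      = Invoke-RestMethod 'https://updates.example.com/api/collector/latest-version';

if ([version]$latest -gt [version]$current) {
    # download the versioned package published by TeamCity
    $package = "$env:TEMP\collector-$latest.zip";
    Invoke-WebRequest "https://updates.example.com/api/collector/$latest" -OutFile $package;

    # shut down the running collector (the real service waits for the current job to finish first)
    Stop-Process -Name 'Collector' -ErrorAction SilentlyContinue;
    Expand-Archive $package -DestinationPath $installPath -Force;   # Expand-Archive needs PowerShell 5+
    Start-Process "$installPath\Collector.exe";
}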

Since we’re distributing executables to many mission-critical machines, security is a big issue. Assuming the TeamCity server is secure from external attacks, the problem then lies with the updater service. If a WebAPI update service is compromised, then any machines requesting updates from it will be too. In order to mitigate this, the executables delivered by TeamCity need to be signed with a certificate that enables strong encryption. The windows service will then have to verify that the collector executable is signed before launching it.
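
From PowerShell, such a check could look like the sketch below (the path and event source are placeholders); the same verification can be done from the .NET code before starting the collector process:

# refuse to launch the collector if its Authenticode signature is not valid
$exePath   = 'C:\Services\Collector\Collector.exe';
$signature = Get-AuthenticodeSignature $exePath;

if ($signature.Status -ne 'Valid') {
    # the 'CollectorUpdater' event source must be registered beforehand
    Write-EventLog -LogName Application -Source 'CollectorUpdater' -EntryType Error `
        -EventId 100 -Message "Signature check failed for $exePath ($($signature.Status))";
    return;
}

Start-Process $exePath;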


We're interested in hearing your thoughts on our design; improvements are always welcome.