Nov 10, 2014

Test ‘til you drop


Testing may seem ugly to you, yet somebody has to do it; otherwise that lovely site you spent months developing will look lovely only to a fraction of your actual customers. And yes, the devil is in the details.

Besides functionality, here's what you must pay attention to when you test your product:

  1. Speed – is your site loading fast enough, for both your local and your international visitors?
    Why do you have to test it? Because there's a big gap between those two categories. A gap of seconds. And it becomes even more obvious when you're on mobile, outside a Wi-Fi connection.
    So how can you test that? Try the website speed test from Pingdom and open the advanced settings to change the server location. The tool also provides information on how you can optimize your website for greater speed. On mobile, just put your cellphone on 3G (yes, not 4G) and see what happens. Are you satisfied?
  2. Browsers – yes, good old browser testing. It may take time to test everything, but it's a must.
    You should look for both functionality and design problems. And don't think that IE is your only adversary: if you work on a Mac you should test Windows browsers, and vice versa. If you don't have the devices, then welcome to virtual machines, because emulating IE 8 or 9 from IE 11 isn't accurate at all, so you'll actually need to install that browser version on a VM. Or you can always try BrowserStack, which works with localhost too.
  3. Adblock – if you hope that everyone will see your beautiful banners and commercials, you are mistaken. Welcome the most beloved browser plugin – Adblock Plus.
    Never tried it on your own website? I'm sure your clients did. But why is this so important? Just have a look at the image below. While the page doesn't have an advertising area at the top, the thing I'm not seeing is actually their own nicely crafted banner. And not only am I not seeing it, it breaks the page as well. And by the way, Adblock works on some mobile devices too.
  4. Mobile – not testing on mobile isn't an option any more. But what can you do when emulators aren't good enough yet and your clients are surely using devices you don't have?
    I recommend you buy at least one device each for iOS, Android and Windows Phone. It doesn't need to be brand new if you can't afford that; just head to analytics to see which devices are most commonly used among your customers. Another option is to kindly ask your coworkers or friends to lend you their phones from time to time. This applies not only to native apps but also to responsive websites. And keep in mind that the browser of choice on mobile devices isn't always the default one of the operating system.
  5. Color blind – almost 10% of the population has some sort of color blindness. That means the color palette you put together to highlight different parts of your website may not be seen very well. It may be hard to understand if you don't see it for yourself. Here's a simulator where you can upload a screen capture of your app and see what color blindness really means. You can change your app for the better by using contrast, not only color, for those important parts.

There are plenty more tests that you can do for your app or website, but these five cover the basics of a good product: it should look good, work well and load fast.

Presentation image from Freepik

Oct 20, 2014

Automated backup for PostgreSQL cluster on Windows with PowerShell


PostgreSQL's base backup tool (pg_basebackup) was introduced with v9.1 and is primarily used for creating a ready-to-go standby replica. Since v9.3, pg_basebackup supports WAL segment streaming via the -X stream option. With -X stream on, pg_basebackup will open a second connection to the server and stream the transaction log in parallel with the cluster files. The resulting backup contains all the data needed to start a fresh replica or to restore the main cluster to its original state. Version 9.4, currently in beta, will further improve pg_basebackup by allowing you to relocate not only the data store but also any table spaces you might have.

If you are running Postgres on Windows and you are looking for an automated way of doing full cluster backups then Backup-Postgres PowerShell script is a good starting point.

The Backup-Postgres script does the following:
  • checks if there is enough free space to make a new backup based on the last backup size (works only with a local backup path)
  • purges expired backups based on the supplied expire date
  • creates a new folder for each backup inside the root backup directory (the root path can be defined as local or network share)
  • calls the pg_basebackup tool to begin a tar gzip backup of every table space, along with all required WAL segments (via the "--xlog" option)
  • writes any encountered errors to Windows Event Log
  • writes backup elapsed time to Windows Event Log
The script can be downloaded from TechNet Gallery or from GitHub Gist.

Configure PostgreSQL server

I'm assuming you have PostgreSQL version 9.3.x or newer installed on Windows Server 2012 or newer.

In order to make a base backup of the entire cluster using pg_basebackup tool you'll have to configure your server for streaming replication. PostgreSQL base backup uses the replication protocol to make a binary copy of the database cluster files and WAL segments without interfering with other connected clients. This kind of backup enables point-in-time recovery and can be used as a starting point for a streaming replication standby server.

Enable streaming replication:

Open postgresql.conf, located (by default, on a 64-bit install) in C:\Program Files\PostgreSQL\9.3\data\, and make the following changes:

wal_level = hot_standby
max_wal_senders = 3  
wal_keep_segments = 10
You should adjust wal_keep_segments based on the amount of changes your server receives while in backup. Each WAL segment is 16MB by default, so the setting above keeps 10 × 16MB = 160MB of WAL around; if you expect more than 160MB of changes in the time it takes to make the backup, increase it.

Create a dedicated backup user in Postgres:

Open psql located in C:\Program Files\PostgreSQL\9.3\bin\, login as postgres and run the following command:

CREATE USER pgbackup REPLICATION LOGIN ENCRYPTED PASSWORD 'pgbackup-pass';
Allow streaming replication connections from pgbackup on localhost:

Open pg_hba.conf, located in C:\Program Files\PostgreSQL\9.3\data\, and make the following changes:

host    replication    pgbackup    ::1/128    md5

Configure Windows server

Create a local Windows user named postgres. It doesn't need to have administrator rights, but it should have full access to the backup folder.
Log off from Windows and log in as postgres. Navigate to C:\Users\postgres\AppData\Roaming\ and create a folder named postgresql. Inside postgresql, create a file named pgpass.conf with the following content:

localhost:5432:*:pgbackup:pgbackup-pass
The pg_basebackup tool will look for this file to fetch the password.
Open Backup-Postgres.ps1 and modify the following variables to match your configuration:

# path settings
$BackupRoot = 'C:\Database\Backup';
$BackupLabel = (Get-Date -Format 'yyyy-MM-dd_HHmmss');

# pg_basebackup settings
$PgBackupExe = 'C:\Program Files\PostgreSQL\9.3\bin\pg_basebackup.exe';
$PgUser = 'pgbackup';

# purge settings
$ExpireDate = (Get-Date).AddDays(-7);
Now it's time to schedule the backup: open Windows Task Scheduler and create a new task. Set up the task to run whether the user is logged on or not, with highest privileges, using the postgres user. Add a recurring trigger; I've set mine to repeat every day, indefinitely. You should carefully choose the best time to start the backup, ideally when the server is least used. In the Settings tab, select the rule Do not start a new instance if the task is already running; this will prevent multiple backups from running in parallel.
Go to the Actions tab and add a new action:

powershell -ExecutionPolicy Bypass -File "C:\Jobs\Backup-Postgres.ps1"

Restore cluster from base backup

In order to restore a base backup with multiple table spaces, you'll have to extract each table space archive to its original path. Since Windows doesn't have native support for tar.gz, you can use the 7-Zip command line.
With 7-Zip you can extract a tar.gz archive without storing the intermediate tar file, since 7-Zip can write to stdout and read from stdin, using the following command:

7z x "base.tar.gz" -so | 7z x -aoa -si -ttar -o"C:\Program Files\PostgreSQL\9.3\data"
Restore steps:

1) Stop Postgres server
2) Delete the data folder contents and all table space contents (if you have enough free space, make a backup copy of the current data and table spaces first)
3) Run the 7zip command and extract each archive to its corresponding folder
4) Create a recovery.conf file in the data folder with the following content, specifying the postgres password:

standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=postgres password=PG-PASS'
5) Open the pg_hba.conf file and comment out all existing rules; this will prevent external clients from accessing the server while in recovery.
6) Start the Postgres server. When Postgres starts it will process all WAL files, and once recovery is finished the recovery.conf file gets renamed to recovery.done.
7) Restore pg_hba.conf to its original state and restart Postgres.

After getting used to the restore process you could automate it with PowerShell.

Oct 16, 2014

PostgreSQL unattended install on Windows Server with PowerShell


I often find myself in the situation where I need to install and configure PostgreSQL on a new VM running Windows. Because repetitive tasks are annoying and error prone, I've decided to automate this process as much as I can using PowerShell.

The Install-PostgreSQL PowerShell module does the following:
  • creates a local windows user that PostgreSQL will use (called postgres by default)
  • the password used to create this account will be the same as the one used for PostgreSQL's postgres superuser account
  • creates postgres user profile
  • downloads the PostgreSQL installer provided by EnterpriseDB
  • installs PostgreSQL unattended using the supplied parameters
  • sets the postgres windows user as the owner of any PostgreSQL files and folders
  • sets PostgreSQL windows service to run under the postgres local user
  • creates the pgpass.conf file in AppData
  • copies configuration files to data directory
  • opens the supplied port that PostgreSQL will use in the Windows Firewall
The script can be downloaded from TechNet Gallery or from GitHub Gist.

Usage

On the machine where you want to install PostgreSQL, download the Install-Postgres.zip file and extract it to the PowerShell Modules directory, usually located under Documents\WindowsPowerShell.
Open PowerShell as Administrator and run Import-Module Install-Postgres. Before running the unattended install you should customize the PostgreSQL configuration files located in Install-Postgres\Config directory.
You can also add a recovery.conf file if you plan to use this PostgreSQL cluster as a standby slave. All conf files located in Install-Postgres\Config will be copied to the PostgreSQL data directory once the server is installed.

Install PostgreSQL with defaults:
Import-Module Install-Postgres
Install-Postgres -User "postgres" -Password "ChangeMe!"
Install PostgreSQL full example:
Install-Postgres `
-User "postgres" `
-Password "ChangeMe!" `
-InstallUrl "http://get.enterprisedb.com/postgresql/postgresql-9.3.5-1-windows-x64.exe" `
-InstallPath "C:\Program Files\PostgreSQL\9.3" `
-DataPath "C:\Program Files\PostgreSQL\9.3\data" `
-Locale "Romanian, Romania" `
-Port 5432 `
-ServiceName "postgresql"

Oct 6, 2014

UX practices for newly born websites


You just launched your website and now you're eager to see if users actually like what you've done, but hold your horses and don't jump to conclusions too fast. Let users enjoy what you have created before throwing a big modal screen with a survey at them. You'll lose them twice as fast as you gained them.

I'm not saying that you shouldn't care, but you should make smart choices when it comes to user feedback in the first month or so of your product.

Here's what you can do:

  1. Analytics – the good old friend; it should be there from the beginning. Watch everything: audience, technology, bounce rate, referrals, direct search (for this I would suggest using Webmaster Tools as well) – I mean everything.
  2. Video recordings from the site – the next best thing: you can record everything a user does on your site, what he clicks, how he scrolls and whether he has difficulties finding stuff on the page or completing a form. It's easier to understand user behavior from this than from a heat map tool. I'm using ClickTale for this job; you can try it for free or find other similar products.
  3. A/B testing – this is the ideal time, right at the beginning, before users get comfortable with the way your product looks. Throw some color on those buttons baby, see what pops! 
  4. Social media – even if you don't have a Twitter/Facebook page (really, why don't you?), people are still going to talk about you, so make sure you search for your brand's name across multiple social media platforms.

Has it been a month or more? OK, take your surveys (don't forget to add some incentive; it's not the end of the world to give them some discounts), call people, interview clients and so on. Never be too aggressive; aggressive is obsolete!

Sep 23, 2014

jQuery Unobtrusive client side form validation for ASP.NET MVC



Here at VeriTech we use a modified version of Microsoft's jQuery Unobtrusive Validation, a support library for jQuery and the jQuery Validate plugin that has shipped with ASP.NET MVC since its third release. It enhances client-side validation by using unobtrusive data-* validation attributes instead of generated code, as in previous versions. Thus, it is easily extendable and doesn't pollute your code.

We also maintain the BForms open source framework, making it the perfect platform to implement and publish the improvements that we added to the way in which validation works. Here I am going to list some of those changes and also explain the reasons behind them.
Even if you use client-side validation, you must validate your data on the server as well, by checking the ModelState.IsValid property. You should also call the ValidationMessageFor HTML helper, responsible for displaying the validation error messages from your ModelState. Sadly, this implies that you are going to redraw the form, adding unnecessary overhead. That's why we made some small improvements to the flow of validating a form. First of all, we are using a modified version of the showErrors method from the jQuery Validate plugin: we added a second boolean parameter which denotes whether the viewport should scroll to the first form error, improving the user experience.
Another thing that we added was an extension method for the ModelStateDictionary class, which gathers all the model errors in an easy to use way.
public virtual BsJsonResult GetJsonErrorResult()
{
    return new BsJsonResult(
        new Dictionary<string, object> { { "Errors", this.ModelState.GetErrors() } },
        BsResponseStatus.ValidationError);
}
By using all the methods above and an overload of jQuery's ajax method, which includes some new callback methods, including one for server validation errors, we managed to give the users a seamless experience, without needing to replace the whole form to render the errors.
$.bforms.ajax({
    url: ajaxUrl,
    data: submitData,
    validationError: function (response) {
        var validator = $('.validated-form').validate();
        validator.showErrors(response.Errors, true);
    }
});
On that note, one of the most frequently asked questions I've heard on the subject of unobtrusive validation is how you are supposed to handle new HTML added to your page. I found that the best way is to remove the data values added to the form, named "validator" and "unobtrusiveValidation", and then call the jQuery.validator.unobtrusive.parse method on the new content.
var $form = $('.validated-form');

$form.removeData('validator');
$form.removeData('unobtrusiveValidation');

$.validator.unobtrusive.parse($form);
Another thing that you are probably going to need at some point is custom validation rules. There are two steps to complete in order to do that. First, you have to create a new validation attribute, which inherits from the ValidationAttribute class and implements the IClientValidatable interface. After that you should call the jQuery.validator.unobtrusive.adapters.add method, which has the following parameters:
  • adapterName : The name of the adapter to be added. This matches the name used in the data-val-nnnn HTML attribute (where nnnn is the adapter name)
  • params:  An array of parameter names (strings) that will be extracted from the data-val-nnnn-mmmm HTML attributes (where nnnn is the adapter name, and mmmm is the parameter name)
  • fn : The function to call, which adapts the values from the HTML attributes into jQuery Validate rules and/or messages.
Don't forget to parse your form again to be sure that the new validation rules are added.
As you can see, you can manually add the data-val-nnnn HTML attributes to your form elements without using a validation attribute, but then you are not going to be able to use the server-side validation methods.

This is the way in which you can implement a custom rule for mandatory checkboxes (for example, a Terms and Conditions agreement):
//server side code using validation attributes
public class BsMandatoryAttribute : ValidationAttribute, IClientValidatable
{
    /// <summary>
    /// Returns true if the object is of type bool and it's set to true
    /// </summary>
    public override bool IsValid(object value)
    {
        // handle only bool? and bool (a boxed bool? is either null or a plain bool)
        if (value == null)
        {
            return false;
        }

        if (value is bool)
        {
            return (bool)value;
        }

        throw new ArgumentException("The object must be of type bool or nullable bool", "value");
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata,
        ControllerContext context)
    {
        yield return new ModelClientValidationRule
        {
            ErrorMessage = this.ErrorMessage,
            ValidationType = "mandatory"
        };
    }
}
//client side validation methods
jQuery.validator.addMethod('mandatory', function (value, elem) {
    var $elem = $(elem);
    if ($elem.prop('type') == 'checkbox') {
        if (!$elem.prop('checked')) {
            return false;
        }
    }
    return true;
});

jQuery.validator.unobtrusive.adapters.addBool('mandatory');

Another improvement we added to the validator is the way validation errors are displayed. By default, a label containing the raw error message is added after your input. We replaced the label with a span containing a Bootstrap warning glyphicon. We also used the Bootstrap Tooltip plugin, so you can see the validation message by hovering over the exclamation mark, as you can see below:

For a live demo of a form validated using our modified version of Microsoft's Unobtrusive Validation, check the BForms Login Demo Page.

Sep 5, 2014

Architecting an automatic updates system for windows services



We are working on a control and monitoring system for on-premises & cloud hosted servers and .NET applications. The monitoring is done by installing a windows service on all the virtual machines that are part of the system. The windows service's job is to run (and keep open) a console app that collects data from the local machine and sends it to the monitor web service. After deploying the windows service to a dozen machines, we realized that updating the collector component takes a lot of time and effort. So we've decided we need an automated process that will detect when a new version of the collector component has been committed to Git, package the new version and apply it on all the servers.

Our automatic updates system is composed of a TeamCity server and a WebAPI service that collects the data, hosts the update packages and acts as a proxy between the TeamCity server and our windows services. We don’t want direct communication between TeamCity and the windows services since it would scale poorly (we can have multiple WebAPI services to distribute packages when we need to scale) and direct communication between the windows service host and TeamCity might not be wanted/possible.

TeamCity will detect changes in the collector component's Git repository, pull the changes locally, build the collector project, increment the assembly version if the automated tests have passed, then use MSBuild to produce the release build and archive the artifacts in a zip package. The package, containing the current version number in its name, gets deployed to the WebAPI service, which stores it in its local storage.

Getting the update package from TeamCity to the update service can be done in several ways. We could configure TeamCity to deploy the WebAPI service along with the ZIP package via IIS Web Deploy, but that would mean that every time a new update shows up the WebAPI service has to be re-deployed and restarted. Another way is to make the WebAPI service query the TeamCity API when the windows service asks it for updates. This is probably the best approach (taking development effort into account), and its downsides can be overcome by making the WebAPI service cache the results for a certain amount of time and only download a new update on the first request. Yet another approach would be to expose the updates folder as a network drive and XCopy the packages to it, but this requires giving the TeamCity account write access to that folder.

There are several viable options for distributing the update package from the WebAPI service to the various windows services. The WebAPI can either push notifications to the windows service (using SignalR or Redis pub/sub), or the windows services can check for updates with the WebAPI service at some set interval. We’ve chosen to have the windows service poll the WebAPI for updates since it’s easier to implement and it’s not critical for us to update the collector component exactly at the moment a new version is available. If, however, we’ll need to make sure the collector is always up to date, the WebAPI service (which not only acts as an update agent, but also collects all the data sent from the collector) can check the collector’s version on each call it receives, and if a new one is present, tell the collector to signal the windows service that an update is needed.
The windows service, once it downloads an update, waits for the collector console to finish its running job and shuts it down, unzips the package and overwrites all the files it contains, then starts the new version of the collector.
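The polling flow described above can be sketched roughly like this (in JavaScript for brevity; the real services are .NET, and the function and parameter names here are purely illustrative):

```javascript
// Decide whether an update is needed by comparing dotted version strings,
// e.g. '1.2.3' vs '1.2.10' (numeric compare, segment by segment).
function needsUpdate(installed, latest) {
    var a = installed.split('.').map(Number);
    var b = latest.split('.').map(Number);
    for (var i = 0; i < Math.max(a.length, b.length); i++) {
        var x = a[i] || 0, y = b[i] || 0;
        if (x !== y) { return y > x; }
    }
    return false;
}

// Poll the update service at a fixed interval; fetchLatestVersion and
// applyUpdate are injected so the decision logic stays independent of
// the transport (hypothetical names, not part of the actual system).
function pollForUpdates(fetchLatestVersion, installedVersion, applyUpdate, intervalMs) {
    return setInterval(function () {
        fetchLatestVersion(function (latest) {
            if (needsUpdate(installedVersion, latest)) {
                applyUpdate(latest); // download package, stop collector, unzip, restart
            }
        });
    }, intervalMs);
}
```

Note the numeric comparison: naive string comparison would consider '1.2.10' older than '1.2.9', which is exactly the kind of subtle bug a polling updater can't afford.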

Since we're distributing executables to many mission-critical machines, security is a big concern. Assuming the TeamCity server is secure from external attacks, the problem then lies with the updater service. If a WebAPI update service is compromised, then any machine requesting updates from it will be compromised too. To mitigate this, the executables delivered by TeamCity need to be signed with a certificate that enables strong encryption, and the windows service has to verify that the collector executable is signed before launching it.


We're interested in hearing your thoughts on our design; improvements are always welcome.

Sep 2, 2014

RequireJS.NET - open source project by VeriTech



Open source is not just for hobbyists, it is used successfully in some of the largest corporations on the planet. Most of the devices we use daily are in one way or another powered by open source. Since we also use open source software to great effect, it is our desire to give back to the community.

RequireJS.NET version 2.0 brings asynchronous JavaScript loading and flexible js bundling configuration to ASP.NET MVC. You can now have all the benefits of RequireJS and more in a way that is easy to use and fully integrated with .NET:
  • dependencies declaration and module path configuration using JSON or XML 
  • JavaScript file structure integrated with the MVC structure (js code-behind for each Razor view) 
  • bundling and minification with RequireJS.NET Compressor and MsBuild 
  • internationalization support for JavaScript resources with ResxToJs
Get started with RequireJS.NET: introduction, setup tutorial, bundling and compression, i18n support.

If you have suggestions or any kind of feedback regarding this project, please submit an issue on GitHub.

Aug 1, 2014

Stealth testing



Analytics seems like nonsense, you're tired of watching recordings of your users' actions, A/B testing works too slowly, so one thing remains: direct user testing.

You gather a group, write your test case scenario and come up with some pretty decent questions for them. Yet you're afraid they will be influenced by you. And you're right: even if they don't realize it, knowing they're being watched makes people act differently with your product. So what can you do?

I like to go into stealth mode; I like to watch them when they feel most comfortable, not knowing the test has already begun.

Let’s see how I do it:
  1. Meet with the subjects one by one and tell them that you want to walk them through what will happen during the testing day
  2. When the meeting starts, tell them you have to make a quick phone call, something urgent
  3. Ask them to help you: to make the call you need some information from your website, and your hands are full so you can't look it up yourself (it's quite useful to hand them a tablet or smartphone on which they can search for what you need)
  4. Start talking on the phone with your imaginary friend. It's time to pay attention to what the users are doing with the task you've given them, while you keep talking. It's OK to stand up and move around, place yourself behind them for a better look, and it's OK to put pressure on them by asking from time to time whether they have found the information you need (in real life your users will use the site under pressure from time to time)
  5. End the phone conversation once you have your answer from them and explain to your test subjects what will happen for the rest of the test
By the end of the day you'll have completed two test case scenarios with the same users. Not only can you see how differently they acted when someone was watching them and when not, but you'll also see how they behaved when they saw your product for the second time.

This was just one example of stealth testing; the process can be made simpler or more elaborate, it can be turned into a game, and it can be anything you want as long as your test subjects don't know they're being watched.

Jun 26, 2014

I do want a better product but I can’t afford to hire a UX guy



First of all, you should think of this as an investment, not an expense – like that time you went to the gym: you didn't think of it as losing money, you thought of it as investing in your looks.

Second of all you don't really have to hire a full-time UX expert. If you have a small business, you could go with a consulting firm to set things on the right track for you.

But picture this: you have 10 programmers working on your product; they constantly add new features that really kick ass in the industry, but nobody is using them. Maybe because they are hard to use, maybe the controls aren't in the right places, maybe the copy isn't great, and you don't really know how to ask your potential customers what's wrong. It may be hard to admit, but you need a fix.

The idea is you could have 9 programmers and a UX person doing a better job. Your product kicks ass only if your users say so!

Save yourself!


Jun 21, 2014

Becoming a better JavaScript developer



One of the most important aspects of a coder's life is the constant drive to learn, learn and learn some more. The industry is evolving and you have to keep up or risk being left behind. This is especially true in the world of JavaScript, which was once considered more suitable for rookies and has now evolved into a full-fledged programming language, with a large ecosystem of libraries and frameworks.

Therefore, I am going to present three practices that will help your code evolve as well.

Start using a script loader and better organize your code

By using a script loader you can get rid of all those pesky script tags from the bottom of your <body> or even worse, your <head>. It also provides an easy way to load your scripts asynchronously and handle dependencies, without worrying about the order in which you should declare them.

Another plus for using a script loader is that it helps you write modular and reusable code, by creating and exporting modules.
My script loader of choice is RequireJS, with a well-documented API. Props to its creator, James Burke.
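To make the dependency-handling idea concrete, here's a toy sketch of how an AMD-style loader wires modules together. This is an illustration of the concept only, not RequireJS itself, and the function names are made up:

```javascript
// A minimal module registry: each entry holds a dependency list,
// a factory function, and the cached exports once built.
var registry = {};

// Register a module by name, with its dependencies and a factory.
function defineModule(name, deps, factory) {
    registry[name] = { deps: deps, factory: factory, exports: null };
}

// Resolve a module: build its dependencies first, then run the factory.
function requireModule(name) {
    var mod = registry[name];
    if (mod.exports === null) {
        var resolved = mod.deps.map(requireModule);
        mod.exports = mod.factory.apply(null, resolved);
    }
    return mod.exports;
}

// Modules can be declared in any order; the loader sorts out ordering.
defineModule('greeter', ['config'], function (config) {
    return { greet: function (who) { return config.prefix + who; } };
});
defineModule('config', [], function () {
    return { prefix: 'Hello, ' };
});

console.log(requireModule('greeter').greet('world')); // "Hello, world"
```

Notice that 'greeter' is declared before 'config' yet still works: this is exactly what frees you from hand-ordering script tags.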

Use a custom jQuery build

What is the first script you include in a newly created project? Your answer is probably jQuery. I get it, jQuery was the library of the millennium, bringing all the different browser implementations under one nice façade. But nowadays it has become much more than a simple utility for traversing the DOM, possibly adding unwanted code to your project.
For example, as stated on jQuery's GitHub page, there's no need to include all the DOM manipulation functions if you only want to make a JSONP request using $.ajax.
There's a full list of excludable modules on that page, so feel free to take a look.

Another mention that I want to make on this topic is James Padolsey's jQuery source viewer, an excellent tool for visualising the way in which methods are implemented in jQuery.

Embrace weak typing and learn how to dodge JavaScript's problems

Maybe you miss working in a strongly typed language, or maybe you like being sure that 1 isn't equal to true, or you have no idea why 1/0 in base 19 is equal to 18. But if you want to succeed as a JavaScript programmer, learning when and why "strange" things happen is essential. Considering the multitude of posts on the web about this subject, I'm not going to reiterate those nuisances here; I'll just leave this useful link to a JavaScript equality table.
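A few of those quirks, runnable in any JavaScript console:

```javascript
// Loose equality coerces types, so 1 "equals" true...
console.log(1 == true);   // true
// ...while strict equality does not:
console.log(1 === true);  // false

// The infamous "1/0 in base 19 is 18":
// 1/0 is Infinity, parseInt stringifies it to "Infinity",
// and in base 19 the digits run 0-9 then a-i, so 'I' parses as 18
// and parsing stops at the first invalid digit ('n').
console.log(parseInt(1 / 0, 19)); // 18

// More coercion fun: [] converts to '' and then to 0
console.log([] == false); // true
```

The practical takeaway is the usual one: prefer === and know what your radix is.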

That's all for today's post; look out for my next entry, containing some tips and tricks for using unobtrusive client validation in ASP.NET MVC.

Jun 20, 2014

Care about yourself to start caring about your users



We love to think that someday we'll have a product that sells itself right from the launch, that people will love it right from the start and that they're going to pay a lot of money to have it even if they don't need it. That might be possible if you sell real unicorns, but the truth is that you don't (if you do, please send me an email, I must have one).

Don't be scared though, you can still sell your products and have a lot of fans by doing the right thing. And the right thing starts with you!

Yes, you. You are the one not only investing in the product but also the one waiting to make a profit out of it. So you are in control, right? You may decide to cut quality by using cheaper materials, but not the final price.

WRONG! Start caring about yourself!

Think how important it is when you buy something and have expectations about quality, utility, aesthetics and so on. Your clients are no different from you. If you buy something that breaks after the first use you get mad; so do your customers.

Respect yourself in everything you do or step aside and let others, who care more, run your business. This is the only way to sell unicorns :)

Easy to say, hard to do, so what can you do after all if you have an online business?
  • Invest in good hosting or, if you can afford it, move to the cloud. Your service must be available all the time and it needs to be fast. A client from the US may wait seconds longer than a client from the UK to see the same thing, and that is too much.
  • Choose the right platform and technology. Choosing poorly may cost you more in the long term; think about support, or not being able to find people to fix your site.
  • Make sure that everything works well, not that it simply works. It will take longer to develop a new feature but in the end everyone will be happier about it.
  • Get rid of advertisement on your site. Not only will you not make money from that but users hate to see tons of flashing images and banners. It looks cheap. Bonus: without them, the site will load faster.

Jun 18, 2014

Why do I need a UX Designer? I just want my site to look good




To be able to answer such a question we must be able to understand what User Experience really is. I'm not going to use a hard to understand definition because I have a little example in mind.

Let's imagine that you want a new car, one built just for you. What steps will you follow?
  • You go to the Web Designer and tell him what you want: fast, powerful, safe. He draws you the car, but that doesn't mean it will actually work. 
  • A second person comes by, the UI Designer, and he makes the car look even better. He chooses a nice set of wheels, he picks out that chrome finish for the doors that matches the wheels but the car still doesn't work. 
  • It's the UX Designer's task to understand what is wrong with the car. He finds out that the steering wheel was put in the wrong place: behind the driver's seat. He understands what the user actually needs, and not only does he fix the car but he creates a nice experience when driving or riding in it.

The major attribute of a UX Designer is that he understands the client’s needs. He knows that it's not enough for your site or a software program to look good, it must also be easy to use. He's the one that can create something people will love, understand and grow with.

But a UX Designer's job is not only about creating nice things for the users, it's a full time job where business goals, design trends, psychology and many more meet to provide customer satisfaction and maximize your revenue.

UX is invisible marketing with feelings!