What’s wrong with MCB Lite card design?

[Image: MCB Lite card]

Note: I am not a UI/UX expert. I am just sharing my feelings about this design as a consumer, along with my limited experience with design.

MCB Bank (one of Pakistan’s largest banks) recently introduced its branchless banking product “MCB Lite”. As a consumer, I am not satisfied with the design of the card, so I am going to share my thoughts about it.

The designer tried to give the card the feel of a smartphone but missed some very basic design principles. Smartphones, especially the iPhone, have set very high standards for design, and anyone trying to design something that looks like a smartphone has to be extra careful. I don’t want to sound harsh, but it looks like the card was designed by someone new to design. The printing quality is even worse.

[Image: MCB Lite card]

I tried to find out what’s wrong with the design, and here are my findings:

And the list goes on….

MCB, I am disappointed by the quality of design (and printing) from a bank like you.

Why I no longer rely on online news [A confession]

It was a Saturday morning in November 2012 when I started noticing tweets about the Google Pakistan and Microsoft Pakistan websites getting hacked. I immediately checked both websites, and they were indeed showing a message from some Turkish hacker. I did an nslookup and found the nameservers had been changed to those of a free hosting service provider. Obviously, Google and Microsoft were not hosting their websites on a free webhost. In fact, they were not the only ones hacked; the real target was PKNIC, the .pk domain registry. I quickly did a reverse whois and randomly checked a few of the affected domains. All of them were showing the same page. There were 284 domains pointing to those specific nameservers. What? 284 domains hacked, and people were talking about just 2 of them. This had to be mega news. I quickly tweeted this:

The tweet went viral and was picked up by many news agencies and blogs. There are still many tweets in Twitter search results:

Many credited me as the source, while others presented it as their own news without any mention of where it came from.

Here are some of them:

And some blogs & news sites in other languages that I don’t understand:

Not only that, the 284 figure was also published in print media. Here is a news item from The News (published by Pakistan’s largest newspaper group):

[Image: news clipping from The News]

So, as you can see, each and every news site and blog was after the story, and everyone was publishing it in their own words. What went wrong here? Did anyone ask any of these blogs or news sites for a list of the 284 hacked domains? Did any of them publish such a list?

The confession part

I tweeted and went for breakfast. After breakfast, I decided to publish the list of the hacked domains. As I started reviewing the list, I noticed that I had made a big mistake while counting. There were 2 nameservers pointing to that free hosting provider, and I had counted every domain pointing to either of them. So there were actually just 142 domains, each counted twice (hence the 284). Now I was extra careful before publishing anything. I checked the nameserver change history of all of those domains and noticed that only 110 had been changed in the last 24 hours. What about the remaining 32 domains pointing to that nameserver? All of them were real websites hosted by that free hosting provider, and they were not hacked. I verified twice and published the list here. My blog was getting a huge traffic spike at the time. A lot of news sites and blogs picked up the list immediately and updated their articles. This is how the online news world works: they pick up news items from whatever source they can get and publish them immediately, without verifying anything.

Exceptional Performance without mod_pagespeed or Apache (Page Speed score: 99)

At last, I have managed to get a Google Page Speed score of 99 and a YSlow score of 97 for this blog. As mentioned earlier, this blog is generated using Pelican and deployed on Heroku’s Cedar stack, which supports Python applications. It is served by a great little WSGI app called ‘static‘, running on gunicorn and gevent. I had to make a lot of changes in static to make this possible.

Gzip Compression

As we are serving static content, there is no need to compress it on each and every request. We can generate gzipped copies along with the rest of the static content and serve them when requested. This approach, in my opinion, is faster than the on-the-fly gzip compression used by nginx and Apache: we save the CPU time spent compressing the same content over and over. I used the gzip_cache plugin to generate gzipped versions of all my content. The next step was to serve those files when requested. Static does not support this by default, so I had to modify it a little: when a request allows gzipped content, it now tries to find a gzipped copy of the file.
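
To illustrate the idea, here is a minimal WSGI sketch (not the actual patch to static): serve the pre-built .gz sibling of a file whenever the client advertises gzip support. The ROOT directory and path handling are simplified assumptions.

import mimetypes
import os

ROOT = "output"  # hypothetical directory holding the generated site

def app(environ, start_response):
    # Map the URL path to a file on disk (greatly simplified)
    rel = environ.get("PATH_INFO", "/").lstrip("/") or "index.html"
    path = os.path.join(ROOT, rel)
    ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
    headers = [("Content-Type", ctype)]

    # Serve the pre-built .gz sibling when the client accepts gzip,
    # instead of compressing on the fly
    if ("gzip" in environ.get("HTTP_ACCEPT_ENCODING", "")
            and os.path.exists(path + ".gz")):
        path += ".gz"
        headers += [("Content-Encoding", "gzip"), ("Vary", "Accept-Encoding")]

    if not os.path.isfile(path):
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not Found"]

    body = open(path, "rb").read()
    headers.append(("Content-Length", str(len(body))))
    start_response("200 OK", headers)
    return [body]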

Leverage browser caching

This is purely handled by the HTTP server serving the content, so again I had to make a few changes in static to enable caching. I tried to keep the syntax similar to Apache’s ExpiresByType: an expiry time, in seconds, can be specified for each MIME type.
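
The configuration below is my own illustration of the idea, not the exact syntax of my modified static: a per-MIME-type expiry in seconds, turned into Expires and Cache-Control headers at serve time.

import time
from email.utils import formatdate

# Expiry in seconds per MIME type, in the spirit of Apache's ExpiresByType
EXPIRES_BY_TYPE = {
    "text/html": 3600,        # one hour
    "text/css": 31536000,     # one year
    "image/png": 31536000,
}

def cache_headers(mime_type):
    seconds = EXPIRES_BY_TYPE.get(mime_type)
    if seconds is None:
        return []
    # formatdate(..., usegmt=True) produces an RFC 1123 HTTP date
    return [("Expires", formatdate(time.time() + seconds, usegmt=True)),
            ("Cache-Control", "max-age=%d" % seconds)]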

Specify a character set

This, too, is purely handled by the HTTP server, and I had to make a few changes in static to support it. Just like the Expires headers, I tried to keep the syntax similar to Apache’s AddCharset: a charset can be set per filename pattern.
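
Again, a small sketch of the idea with illustrative patterns rather than the actual configuration: a charset appended to the Content-Type header based on filename patterns.

import fnmatch

# (charset, pattern) pairs, in the spirit of Apache's AddCharset
ADD_CHARSET = [
    ("utf-8", "*.html"),
    ("utf-8", "*.css"),
    ("utf-8", "*.js"),
]

def content_type(filename, base_type):
    for charset, pattern in ADD_CHARSET:
        if fnmatch.fnmatch(filename, pattern):
            return "%s; charset=%s" % (base_type, charset)
    return base_type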

Minify resources & Combine external resources

I use the assets plugin, which is built on top of webassets, to combine and minify resources. This is done offline, so there is no minification or combining overhead at serve time.
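
Under the hood this boils down to webassets; a rough sketch of the equivalent standalone usage (bundle name and file paths are made up for illustration, and the cssmin filter needs the cssmin package installed):

from webassets import Environment, Bundle

env = Environment(directory="output/theme", url="/theme")
css = Bundle("css/bootstrap.css", "css/custom.css",
             filters="cssmin", output="css/style.min.css")
env.register("css_all", css)

# urls() triggers a one-time build, so nothing is minified per request
print(env["css_all"].urls())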

Optimize images

Lossless compression of images was done using jpegtran and optipng. I automated this by writing a Pelican plugin. Again, it runs offline, so no CPU is needed to serve optimized images.
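
The plugin amounts to something like the sketch below (function names are mine, not necessarily the actual plugin’s): hook Pelican’s finalized signal and run the optimizers over the output directory.

import os
from subprocess import call

from pelican import signals

def optimize_images(pelican):
    # Walk the generated output and recompress images losslessly in place
    for root, _, files in os.walk(pelican.output_path):
        for name in files:
            path = os.path.join(root, name)
            if name.lower().endswith((".jpg", ".jpeg")):
                call(["jpegtran", "-copy", "none", "-optimize",
                      "-outfile", path, path])
            elif name.lower().endswith(".png"):
                call(["optipng", "-quiet", path])

def register():
    signals.finalized.connect(optimize_images)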

Remove unused CSS

This blog’s template was built with Twitter Bootstrap plus a lot of custom CSS. Even after combining and minification, the stylesheet was 130KB. I used mincss to find and remove unused CSS; it is now just 14KB (4KB gzipped). I had to re-add some styles that are only used on other pages. Once again, this is done offline, at design time only.
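
For reference, a hedged sketch of how mincss is used (attribute names follow its Processor API as I recall it; check the library’s docs):

from mincss.processor import Processor

p = Processor()
p.process("http://localhost:8000/")  # a locally served copy of the page
for link in p.links:  # each external stylesheet found on the page
    # 'before' is the original CSS, 'after' has unused selectors removed
    print("%s: %d -> %d bytes" % (link.href, len(link.before), len(link.after)))
    with open("cleaned.css", "w") as f:
        f.write(link.after)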

What’s still missing?

Specify image dimensions

With a responsive design, it is not practical to send all images with fixed dimensions: the images resize themselves according to the screen size. We could use some JavaScript to determine the screen size and set image dimensions accordingly, but that would have its own overhead.

Leverage browser caching for external resources

This blog uses only one external resource, ga.js, the JavaScript file used by Google Analytics. It comes with an Expires header of 12 hours. There has been a lot of discussion about caching it and serving it from one’s own servers, but I think anything like that would be overkill: ga.js is so common that it has probably been downloaded already for some other website.

Using CDN for static content

This task is on my to-do list; I am still looking for a good (preferably free) CDN.

Twitter users in Pakistan [Part-II]

This blog post is a continuation of Part-I. The sample has now been increased to 150K Pakistani tweeps.

Followers

Follower count is no longer a good measure of influence. On average, each Pakistani tweep is followed by 129 users, but the distribution is heavily skewed: about three-fourths of Pakistani tweeps have fewer than 50 followers, and half have fewer than 10. There are about 10,000 tweeps with no followers at all and about 12,000 with a single follower. This is a very strange trend. If you look closely at these accounts, you’ll notice that most of them have the default display picture and the default background. It seems these are fake accounts, created by the social media cells of different political parties to inflate the follower counts of their leaders on Twitter.

On the other side, there are just 24 Pakistanis with more than 50,000 followers. Most of them are politicians and TV anchors. Just 2,331 tweeps have more than 1,000 followers.

[Chart: Followers of Pakistani twitter users]

Klout

Klout is a more reliable measure of social media influence. Out of 150,000 Pakistani tweeps, about 40,000 do not have any Klout score. About 70,000 have a Klout score between 11 and 20; the average is 16.72. About 12,000 have the minimum possible score of 10.

Only 22 users have a score above 70.

[Chart: Klout of Pakistani twitter users]

Here is the list of the most influential Pakistanis (Klout: 70+):

Note: These scores may have changed by the time you read this article.

Most commonly used words in Pakistani tweeps’ profiles

For this analysis, the profile descriptions of about 150,000 Pakistani tweeps were used. Out of 150K, only about 77K (about 51%) have set the description field in their Twitter profiles.

Excluding punctuation and stopwords, the following are the most commonly used words by Pakistani tweeps in their profiles:

  1. love
  2. pakistan
  3. student
  4. follow
  5. like
  6. life
  7. pakistani
  8. engineer
  9. cricket
  10. social

The analysis was done with NLTK, using its FreqDist and stopwords corpus.
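
For the curious, it boils down to something like this sketch (using a recent NLTK, where FreqDist behaves like a Counter; the descriptions list below is placeholder data standing in for the ~77K actual profile descriptions):

import string

# requires NLTK's punkt and stopwords data to be downloaded first
from nltk import FreqDist
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

descriptions = ["Student. Cricket lover. Proud Pakistani."]  # placeholder

stop = set(stopwords.words("english"))
tokens = []
for desc in descriptions:
    for token in word_tokenize(desc.lower()):
        if token not in stop and token not in string.punctuation:
            tokens.append(token)

for word, count in FreqDist(tokens).most_common(10):
    print("%s: %d" % (word, count))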

Moving my blog from posterous to Pelican

Background

I do not want to start this blog post by bashing Posterous. Posterous is a great tool for quickly making blog posts. A couple of years back, some of its unique features convinced me to move my blog from WordPress to Posterous: Posterous offered a custom domain name for free, whereas WordPress charged for it. I really liked the email-to-blog-post feature, although I never used it beyond testing it a couple of times. Another amazing feature was how Posterous detected external objects like YouTube videos and GitHub gists and turned them into beautiful widgets.

Posterous provides some nice templates, but I wanted more control over presentation. A few days back, YouTube was blocked in Pakistan, and some misconfiguration caused problems loading other Google sites as well. This affected sites that use resources from Google, such as the JavaScript for Google Analytics or Google Maps. The same thing happened to my site: the template I was using pulled some resources from Google. I don’t know why, but they were there, and there was no way to remove them. The end result was a slow-loading page for my Pakistani audience.

Another problem was how Posterous modifies the HTML of a blog post. Again, I wanted more control over the presentation of my posts. Inserting a table into a blog post was anything but trivial: the WYSIWYG editor could not handle tables, even when they were copy-pasted. I had to draft the HTML manually and paste it into the HTML view of the editor, and even then it got modified when rendered :-(

Using SSGs

The idea of a static site generator (SSG) is amazing. Why do I need a dynamic setup for content that is hardly modified once a month? I tried Jekyll and Pelican and decided to use Pelican. Why Pelican? Because of my bias towards Python. Jekyll is an equally good, maybe even better, SSG.

Being a geek, I like writing in plain text editors more than in WYSIWYG editors. Writing in Markdown and reStructuredText is fun: one can stay focused on writing rather than on formatting. My content is saved as content, not as HTML markup. It gets better revision management using git or any other version control system, and it can easily be imported into any other application. The content lives in files, not in a DB, so I can write offline and publish when I am online.

I have full control over the rendered page. I can design and optimize it as I want. I do not have to worry about security or scaling, as all the content is purely static.

User Experience & Minimalism

I am not a UX expert, but I do not want a lot of distractions around my content. Here is what I did to improve the UX:

Migration

Jekyll provides a Posterous importer, but Pelican does not. Currently, Pelican provides only the following importers:

For Posterous, I had to write my own importer, which consumes the Posterous API. Here is the code:

def posterous2fields(api_token, email, password):
    """Imports posterous posts"""
    import base64
    from datetime import datetime, timedelta
    import simplejson as json
    import urllib2
    # slugify lives in pelican.utils in the Pelican of this era
    from pelican.utils import slugify

    def get_posterous_posts(api_token, email, password, page=1):
        # The Posterous API v2 uses HTTP Basic auth plus an api_token
        base64string = base64.encodestring('%s:%s' % (email, password)).replace('\n', '')
        url = "http://posterous.com/api/v2/users/me/sites/primary/posts?api_token=%s&page=%d" % (api_token, page)
        request = urllib2.Request(url)
        request.add_header("Authorization", "Basic %s" % base64string)
        handle = urllib2.urlopen(request)
        return json.loads(handle.read())

    # Fetch page after page until the API returns an empty list
    page = 1
    posts = get_posterous_posts(api_token, email, password, page)
    while posts:
        for post in posts:
            # Fall back to a slug generated from the title
            slug = post.get('slug') or slugify(post.get('title'))
            tags = [tag.get('name') for tag in post.get('tags')]

            # display_date looks like "2012/11/24 09:15:20 +0500";
            # parse it and normalize to UTC
            raw_date = post.get('display_date')
            date_object = datetime.strptime(raw_date[:-6], "%Y/%m/%d %H:%M:%S")
            offset = int(raw_date[-5:])
            delta = timedelta(hours=offset / 100)
            date_object -= delta
            date = date_object.strftime("%Y-%m-%d %H:%M")

            yield (post.get('title'), post.get('body_cleaned'), slug, date,
                   post.get('user').get('display_name'), [], tags, "html")

        page += 1
        posts = get_posterous_posts(api_token, email, password, page)

The above code produces Pelican fields, which can then be passed to fields2pelican, which in turn uses pandoc to transform the HTML content into Markdown or reStructuredText.
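
For completeness, a hedged usage sketch (the exact signature of fields2pelican may differ across Pelican versions, so treat the arguments below as illustrative):

from pelican.tools.pelican_import import fields2pelican

# API_TOKEN, EMAIL and PASSWORD are placeholders for your own credentials
fields = posterous2fields(API_TOKEN, EMAIL, PASSWORD)
fields2pelican(fields, 'markdown', 'content')  # markup format, output directory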

Deployment

The site is deployed on Heroku’s Cedar stack, which supports Python applications. It is served by a great little WSGI app called ‘static‘, running on gunicorn and gevent.

Update: Using my own fork of static for performance tweaks.