
Posted on Apr 13 2012 by Dhwanit Category: Apps

Calculate your Amazon AWS Hosting Costs using Excel

One of the most common questions that comes up whenever we recommend Amazon AWS as a hosting platform is "how much would it cost me per month?" We would refer clients to the official AWS hosting calculator, and they would invariably come back and say they didn't understand how to use the tool. Well, they're not alone... I have a confession to make -- I've played with the tool myself and I still haven't figured out how to use it!

So, we did the next best thing... made a simplified version of the AWS calculator. It may not be accurate to the penny, but it gives a ball-park range of how much hosting would cost per month. Based on what clients have paid Amazon on previous projects and our own experience of paying Amazon for hosting, this simplified spreadsheet is about 90-95% accurate in projecting costs for the first year.

Download the AWS Hosting Cost Calculator in Excel format. The AWS hosting prices in it were last updated on April 13, 2012 and may have changed in the meantime, so cross-check the EC2 prices and the RDS prices before you use the Calculator.

Download Now!

Calculator: simple usage

  1. Open up the downloaded Excel sheet. The first column (A) lists the standard instance types. We have not included the other high-performance/high-memory instances for the sake of simplicity; there's also a good chance you won't need such specialized instance types during the first year of hosting.
  2. Cells that are green in color are to be filled in by the user.
    E10 - E33: Number of On-demand instances required.
    M10 - M33: Number of instances required that have been Reserved for 1 year.
    V10 - V33: Number of instances required that have been Reserved for 3 years.
    E38: Expected amount of data (in GB per month) to be transmitted through the Elastic Load Balancer (ELB). If you're not too sure, enter 50 (GB per month); data transfer through the ELB isn't very expensive.
    F39: Number of alarms configured for triggering auto-scaling. Usually it's 2 -- a scale up and a scale down -- but for fine-grained scaling there may be more alarms defined. Enter 2 to use the default.
  3. As you make changes, the monthly and yearly prices appear in cells IJ2, IJ3 and IJ4 (merged cells for columns I & J on rows 2, 3 and 4). These are the cells with a yellow background and a thick black border: your monthly, amortized monthly and yearly costs.

Calculator: a bit more advanced usage

  1. If you're only making changes in the On-demand section (E10 - E33), cells IJ2 and IJ3 will be identical (no reservation fee is being paid). However, if you make changes in the Reservation for 1 year section (M10 - M33) or the Reservation for 3 years section (V10 - V33), the costs in cells IJ2 and IJ3 will differ. This is because when reserving instances you pay an upfront fee to Amazon before you start paying for hosting every month. Without the reservation fee included, your actual monthly outflow is the amount in cell IJ2; with the reservation fee amortized across 12 months, the effective monthly outflow is the amount in cell IJ3.
  2. If you want to compare On-demand versus 1 year Reservation versus 3 year Reservation costs, first enter the relevant numbers in all three columns E, M and V. Ignore cells IJ2, IJ3 and IJ4, since they show the total cost across all three types (on-demand, reserved 1 year, reserved 3 years). Instead, scroll down to row 41 onwards.
  3. Cells B48, C48 show the per-month and per-year cost of only the On-demand instances chosen. Cells B56, C56 show the per-month and per-year cost of only the 1 year Reserved instances; B57 is the effective cost per month when the 1 year reservation fee is amortized across 12 months. Cells B65, C65 show the per-month and per-year cost of only the 3 year Reserved instances; B66 is the effective cost per month when the 3 year reservation fee is amortized across 12 months. A short sketch of this amortization arithmetic follows the list.
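For readers who prefer code to spreadsheets, here is a minimal Python sketch of that arithmetic. The hourly rates and upfront fees below are made-up placeholders, not Amazon's actual prices -- always cross-check the current EC2 and RDS price lists.

# Minimal sketch of the spreadsheet's amortization arithmetic (illustrative numbers only).
HOURS_PER_MONTH = 730  # roughly 24 * 365 / 12

def first_year_costs(hourly_rate, upfront_fee=0.0):
    """Return (monthly_outflow, effective_monthly, yearly) for one instance.

    monthly_outflow   -- what you actually pay Amazon each month (cell IJ2)
    effective_monthly -- outflow plus the upfront fee amortized over 12 months (cell IJ3)
    yearly            -- total for the first year (cell IJ4)
    """
    monthly_outflow = hourly_rate * HOURS_PER_MONTH
    effective_monthly = monthly_outflow + upfront_fee / 12.0
    yearly = monthly_outflow * 12 + upfront_fee
    return monthly_outflow, effective_monthly, yearly

print(first_year_costs(0.08))                      # on-demand: IJ2 and IJ3 are identical
print(first_year_costs(0.03, upfront_fee=195.0))   # reserved: IJ2 and IJ3 now differ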

End-note

We use this Excel sheet all the time to set client expectations of hosting costs under AWS. We just thought we'd share it with others who might also have such a need.

Webrelational Media LLP has extensive experience in developing feature-rich, highly optimized, scalable and secure web applications. A lot of our websites are hosted on the Amazon AWS cloud. If you are looking for a web development partner to develop and host your next web project in the Amazon AWS cloud, send us an enquiry. We'll be happy to take it from there.

Posted on Sep 15 2011 by Dhwanit Category: Apps

Drupal in the Amazon AWS Cloud

Recently, we delivered a user voting web application, hosted on the Amazon Web Services (AWS) infrastructure, that scales up or down on demand. In the process we learned a great deal about our choice of CMS, and about the joy of developing for the cloud!

 

Drupal is our CMS of choice. All the websites we’ve developed over the last couple of years have been made using Drupal. While Drupal provides a robust base for developing advanced web applications with extended custom functionality, it still does rely on the LAMP (Linux, Apache, MySQL, PHP) stack and requires a locally available file system to store user-uploaded files. Drupal, out-of-the-box, is inherently cloud averse.

We had to find a way of making Drupal cloud-friendly since the web application’s fundamental requirement was to be running on scalable Amazon Web Services.

Here's how we did it:

Application architecture for the Cloud

There were certain assumptions already in place before we started work on architecting the application:

  • The application will use Amazon Web Services: We would be hosting the Drupal database on Amazon RDS; the web servers will run on one or more Amazon EC2 instances.
  • The website will rely on Amazon Elastic Load Balancer (ELB) to drive traffic to the individual instances based on their health (processing capability or CPU utilization).
  • All compute-intensive processing will run on a separate EC2 server, asynchronously from the web application. Inter-server messaging will be done through Amazon SQS.
  • The load balancer will auto scale-up or scale-down EC2 instances based on triggered alarms programmed into the auto-scaling group.

With these assumptions, work on the application began. However, it was only mid-way through the project that the first major bottleneck was discovered: where would we store the files? One of the assumptions we had, not listed above, was that one master Amazon Elastic Block Store (EBS) disk would be available across all instances… sort of like a Network Attached Storage (NAS) mounted on all running EC2 instances with read/write capability. Boy, were we wrong on that one!

It soon became clear that a “locally available” file system for Drupal to manage files associated with the website wasn’t going to make the cut since EBS volumes could only be mounted on a single EC2 instance. Enter Amazon Simple Storage Service, or Amazon S3 to the rescue!

Storage architecture for the Cloud

Drupal’s behavior is to receive files uploaded by a website user and store them, usually in the default location of /sites/default/files. This is a directory on the web server’s hard disk, to which Drupal has read/write permissions.

This behavior had to change so that Drupal never stores files on the local hard disk (or mounted file system). Besides the primary problem of user-uploaded files, there was a secondary issue: files generated by Drupal in the course of normal website functioning also had to be kept off the local server.

Local versus S3 Storage

These generated files included imagefield thumbnails created when users uploaded images; resized imagecache derivatives of specific dimensions (driven by the website look and feel), generated the first time a visitor fetched them; and aggregated CSS and JavaScript files produced for optimization.

The solution was to modify both Drupal core and the various contributed modules to work in the cloud environment:

  • Aggregation: We modified the Drupal internal routines to generate aggregated CSS and JavaScript files to store the aggregated files directly into the static S3 bucket instead of the local file system.
  • Imagefield thumb: We modified the imagefield thumbnail creation routines so that the created imagefield thumb is moved from the local file system into the public S3 bucket.
  • Imagecache: Image caches are generated when a user tries to fetch a non-existent image from the supplied caching URL on the web page (/sites/default/files/imagecache/<cache_name>). We modified the imagecache module so that the moment it generates the imagecache, our code moves the imagecached file into the public S3 bucket. This works like a charm because the current request returns the image from the local file system in its response, but future requests go directly to S3 instead of to an EC2 instance behind the load balancer.

For all this to work, a separate table was created in the database to maintain a list of Drupal files that have an associated object in the application’s various S3 buckets. We also developed theme override routines that push S3 URLs out onto web pages: these routines check the new table and emit the S3 URL for files already stored in S3, or fall back to Drupal’s default URLs if they aren’t found there.
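Our actual changes lived inside Drupal's PHP, but the idea is simple enough to sketch in a few lines of Python with boto3. The bucket name, key and public URL format below are illustrative assumptions, not our production values:

import boto3

s3 = boto3.client("s3")

def push_to_s3(local_path, bucket, key):
    """Upload a locally generated file (aggregated CSS/JS, an imagecache derivative,
    an imagefield thumbnail) and return the URL future page requests should use."""
    s3.upload_file(local_path, bucket, key, ExtraArgs={"ACL": "public-read"})
    return "https://%s.s3.amazonaws.com/%s" % (bucket, key)

# The mapping table described above records, per Drupal file, whether an S3 object
# exists and therefore which URL the theme layer should emit.
url = push_to_s3("/var/www/sites/default/files/imagecache/thumb/photo.jpg",
                 "example-public-bucket", "imagecache/thumb/photo.jpg")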

Compute intensive media processing

Some of the files that were uploaded by website users weren’t just images. The major component of the website was to allow music artists to upload original songs as MP3 files. The website then had to process this media to generate several files of varying durations and quality. Since media processing is compute intensive and time consuming, it had to be done asynchronously and not as part of web requests coming in:

  • An artist would upload an MP3 file, which would straightaway be stored on S3.
  • Once consent for publishing has been provided, the web application would send out a request to a separate media-processing EC2 instance via Amazon SQS.
  • If the media-processing EC2 instance wasn’t running, the application would “wake it up.”
  • The media-processing EC2 instance would read from this pre-processing SQS queue and perform all the necessary compute- and memory-intensive tasks of adding effects, resizing the track and so on.
  • On success (or failure), the media-processing EC2 instance would write out an appropriate message to a post-processing SQS queue.
  • The web application would periodically (on cron run) check the post-processing SQS queue and appropriately update the database with the information returned from the media-processing EC2 instance.
  • When the media-processing EC2 instance didn’t have any pre-processing messages within a certain period of time, it would shut itself down.

Yes, there were error checks involved, and all operations were retried multiple times before the application or the media-processing instance gave up altogether.
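We drove this handshake from Drupal in PHP, but the queue choreography itself is small. A condensed Python/boto3 sketch of the same flow, with queue names and message fields made up for illustration:

import json
import boto3

sqs = boto3.resource("sqs")
pre_q = sqs.get_queue_by_name(QueueName="media-preprocessing")    # illustrative names
post_q = sqs.get_queue_by_name(QueueName="media-postprocessing")

# Web application: ask the media box to process a freshly published track.
pre_q.send_message(MessageBody=json.dumps(
    {"track_id": 42, "s3_key": "uploads/original/42.mp3"}))

# Media-processing instance: poll, transcode, report back; shut down when idle.
for msg in pre_q.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=20):
    job = json.loads(msg.body)
    # ... fetch the MP3 from S3, generate the required durations/qualities, upload results ...
    post_q.send_message(MessageBody=json.dumps(
        {"track_id": job["track_id"], "status": "ok"}))
    msg.delete()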

Load Balancing and Auto Scaling

The final piece of the jigsaw was to ensure that as demand went up, the system scaled up horizontally and when the system detected that CPU utilization was low, the system scaled down by shutting off unused or little used instances automatically.

Amazon AWS Cloud Based Application Architecture

Amazon makes it very simple to achieve all this by using a load balancer that routes user traffic to one or more application EC2 instances. To determine the actual instance to route traffic to, the load balancer analyzes received health check metrics generated by all EC2 instances behind the load balancer.

We created a launch configuration with a custom-built AMI for this web application. A script then creates the auto-scaling group, programs the scaling policies and sets up the trigger alarms that execute them. This is where a dedicated systems administrator can be very helpful, monitoring traffic and making the necessary changes to the scaling policies.
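Our script used Amazon's original API tools; a rough equivalent using today's boto3, with the group name, sizes and thresholds purely illustrative, might look like this:

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Auto-scaling group built from the custom AMI's launch configuration,
# registered behind the load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",
    MinSize=1, MaxSize=6,
    AvailabilityZones=["us-east-1a"],
    LoadBalancerNames=["web-elb"],
)

# Scale-up policy, triggered by a CloudWatch CPU alarm.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-up",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)
cloudwatch.put_metric_alarm(
    AlarmName="cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300, EvaluationPeriods=2, Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    AlarmActions=[policy["PolicyARN"]],
)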

Optimization and Testing

Once the beta site was up, a combination of ApacheBench and Blitz.io testing was run. ApacheBench simulated simultaneous access by 100 signed-in users, with good results. In parallel with the ApacheBench signed-in-user test, multiple Blitz.io rushes were performed.

To boost the results further, a separate EC2 instance running the memcached daemon was also set up. This not only reduced the load on the RDS instance; since much of the data returned to visitors came straight from memory, responses were much faster. It turned out that a micro EC2 instance running memcached was sufficient for serving close to 6 million requests a day! CPU utilization on the micro instance never went beyond 20% even at full load, while network I/O showed a high level of activity, transferring hundreds of megabytes of data directly out of the memory cache.
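On the Drupal side, the memcache contributed module is pointed at the cache box from settings.php. For Drupal 6 the wiring looks roughly like the two lines below; the module path and the cache instance's private IP are illustrative, so check the memcache module's README for your Drupal version:

$conf['cache_inc'] = './sites/all/modules/memcache/memcache.inc';
$conf['memcache_servers'] = array('10.0.0.5:11211' => 'default');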

In the end, the web site loaded and returned pages back to the visitor in just under one second from anywhere in the world.

Blitz.io Rush results

End notes

While there are many more optimizations and additions that could be made, cost was definitely a factor in deciding what got left out of the initial deployment.

The Amazon CloudFront content distribution network (CDN) was given a pass since S3 by itself was very fast (as evidenced by the Blitz.io testing from different locations worldwide). Similarly, an RDS read replica could have been instantiated in a different availability zone to make the website data more robust, but in the end daily backups of the master RDS instance were sufficient, with its processing power never exceeding 50% during testing.

All-in-all this was a great website to build and we were thrilled that we could use Drupal and modify it to suit the cloud!

Posted on Feb 8 2011 by Dhwanit Category: Apps

Apache (httpd) and lighttpd on an Amazon AWS Basic AMI

Over the last couple of days we did some intensive work comparing the execution of a complex business process management web application, developed using Drupal and CakePHP, by running it first on Apache and then on lighttpd. The need of the hour was optimization without touching the application software itself (which will come up later).

After going through a whole bunch of online resources and comparisons, hosting our application on lighttpd seemed like the justified way forward. According to all the expert views, lighttpd is fast, can handle thousands of concurrent requests without barfing or occupying huge amounts of memory, and doesn't use a process per connection, freeing the OS to do more important things, like fetching network traffic or managing disks.

Most comparisons we came across were done on localhost or over the local LAN, with crazy throughputs and mind-boggling numbers of requests per second handled. These were impressive to look at, but definitely did not depict the real-world situation we'd be dealing with once our application went live. So, we had to come up with our own numbers, with the software running on a server on the Internet being accessed over slow broadband connections in India (which defines 90% of our target market).

Server configuration

We've been test driving Amazon Web Services over the past few months (ever since they came up with the Amazon AWS free usage tier) on a 32-bit micro-instance running a basic Amazon Machine Image (AMI in Amazon-speak). Simply put, it's a 32-bit installation of Linux 2.6.34. The rest of the configuration: 613MB of RAM, up to 2 EC2 compute units, 10GB of persistent storage on EBS. Amazon micro-instances provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited to lower-throughput applications that consume significant compute cycles periodically.

Installing lighttpd

Amazon's repository includes a pre-built lighttpd 1.4.28 (the latest release as of Feb 2011), so that's where we began. Installation was straightforward:

yum -y install lighttpd
yum -y install lighttpd-fastcgi

While starting up lighttpd after this was no problem, it showed us the static welcome page when we visited the newly set up server. Making this work with our web application, based on Drupal and CakePHP, was a bit more involved. We were initially thrown off guard when yum created a fastcgi.conf file that looked a bit strange compared to numerous examples on the net. Eventually, we settled on not uncommenting any of the examples provided in the file; instead, we added the following configuration:

File: /etc/lighttpd/conf.d/fastcgi.conf

fastcgi.server = ( ".php" =>
((
"socket" => "/tmp/php-fastcgi-1.socket",
"bin-path" => "/usr/bin/php-cgi",
"max-procs" => 1,
"broken-scriptfilename" => "enable",
))
)

We also uncommented mod_rewrite in the list of standard modules, along with the lines that include the conf files for magnet and fastcgi, all in the same file:

File: /etc/lighttpd/modules.conf

server.modules = (
"mod_access",
# "mod_alias",
# "mod_auth",
# "mod_evasive",
# "mod_redirect",
"mod_rewrite",
# "mod_setenv",
# "mod_usertrack",
)

#
# many more lines here...
#

include "conf.d/magnet.conf"
include "conf.d/fastcgi.conf"

To make pretty URLs and Apache-equivalent redirects work with the Drupal component of our application, we copied Albright's Ultimate Lua script into our /var/www area and set it up:

File: /etc/lighttpd/conf.d/magnet.conf

magnet.attract-physical-path-to = ("/var/www/drupal.lua" )

This gave us a baseline installation of lighttpd that was compatible with Drupal and CakePHP.

Compressed output

The first issue for our application was that lighttpd was not compressing HTML output being sent to the browser. After a quick search it became clear that mod_compress was not going to do the trick, since it works only on static HTML files. What we needed was mod_deflate, which compresses dynamically generated HTML as it is sent to the browser, and which isn't readily available in any lighttpd 1.4.x release (it's planned as part of lighttpd 1.5). So, we had to build lighttpd 1.4.28 from source after patching in the required changes to enable mod_deflate. The official mod_deflate wiki documentation does not list a patch for 1.4.28; however, with minor modifications, the patch file for 1.4.26 worked on version 1.4.28. We've created a patch file for lighttpd 1.4.28 that enables mod_deflate, which you can download here: lighttpd-1.4.28.mod_deflate.patch.

Build instructions

Grab the release tarball of lighttpd 1.4.28 from the official downloads page and apply the patch after extracting the tarball:

patch -p0 < lighttpd-1.4.28.mod_deflate.patch

Run the configuration script from your lighttpd-1.4.28 directory. We needed SSL and memcache enabled, so we used the following command (this assumes that all the -devel libraries have been installed for mysql, php, openssl, gamin, zlib, bzip2, lua and memcached, and that libmemcache has been built previously):

./configure --prefix=/lighty_custom --with-mysql --with-zlib \
    --with-bzip2 --with-fam --with-memcache --with-lua \
    --with-openssl --with-gdbm

Build lighttpd from your lighttpd-1.4.28 directory:

make
make install

This will set up your custom version of lighttpd in /lighty_custom. To make this custom version of lighttpd work using the "service" command, make the following changes:

File: /etc/init.d/lighttpd

#exec="/usr/sbin/lighttpd"
exec="/lighty_custom/sbin/lighttpd"
prog="lighttpd"
config="/etc/lighttpd/lighttpd.conf"

Configure mod_deflate. Create a new file in your lighttpd conf.d directory:

File: /etc/lighttpd/conf.d/deflate.conf

server.modules += ( "mod_deflate" )

deflate.enabled = "enable"
deflate.compression-level = 9
deflate.mem-level = 9
deflate.window-size = 15
deflate.bzip2 = "enable"
deflate.min-compress-size = 200
deflate.work-block-size = 512
deflate.mimetypes = ("text/html", "text/plain", "text/css", "text/javascript", "text/xml")

Include the deflate.conf in the list of modules being loaded:

File: /etc/lighttpd/modules.conf

include "conf.d/deflate.conf"

Restart the lighttpd service. You should now see lighttpd compressing HTML output on dynamically generated pages from PHP.

So, what about the results?

While it took us several hours spread over a couple of days to figure out an optimal lighttpd setup specific to our application, the benchmarking results were mixed. The first result was discouraging: just browsing around our application running on lighttpd seemed slower than earlier, when it was hosted on Apache. Now, 90% of the time this can be attributed to the slow 512kbps broadband connection we have. So, instead of relying on human perception, we used Apache Bench (ab) to run a simple benchmark for us. That result, too, was discouraging.

The "ab" command we used for testing lighttpd (and Apache) without a compression enabled request (cookies to indicate logged in user):

$ ab -n 5 -C SESS39166d130128e59d9c9fa0b10f5052a6=nsjf0.... \
     -C SESS533bfe3264360767532928302a142586=mmhkp.... \
     https://ec2-XX-XX-XX-XX.compute-1.amazonaws.com/dashboard

The "ab" command we used for testing lighttpd (and Apache) with a compression enabled request (cookies to indicate logged in user):

$ ab -n 5 -C SESS39166d130128e59d9c9fa0b10f5052a6=nsjf0.... \
     -C SESS533bfe3264360767532928302a142586=mmhkp.... \
     -H 'Accept-Encoding: gzip' \
     https://ec2-XX-XX-XX-XX.compute-1.amazonaws.com/dashboard

While I know that 5 non-concurrent hits to a server hardly constitutes a test, higher numbers simply failed with lighttpd. Even 10 non-concurrent requests failed with a timeout, so the usual suspect is our local broadband connection. We have to run this test on a reliable Internet connection that doesn't fail mid-way through a test of something like 100 or even 1,000 runs one after the other. The other issue is that the Amazon micro-instance is geared towards bursty utilization of CPU resources. With "ab" it's one hit after another, which sure ain't bursty. An AWS micro-instance doesn't like that very much! The result is CPU throttling, which times out some executions mid-way.

A more accurate test would be to have a fat pipe to the Internet that can sustain continuous "ab" testing over long periods of time, and to run our application on an Amazon AWS small instance, which provides a constant amount of computing power to the application instead of throttling after a burst.

Final disclaimer: Take these results with a pinch of salt! :)

 
                      lighttpd 1.4.28                          Apache 2.2.16
                      (no compression)   (with compression)    (no compression)   (with compression)

Document Length       86863              86863*                86863              11247
SSL/TLS Protocol      TLSv1/SSLv3, AES256-SHA, 2048, 256       TLSv1/SSLv3, DHE-RSA-AES256-SHA, 2048, 256
Test Time (seconds)   49.166             32.130                26.344             22.578
Requests/Second       0.10               0.16                  0.19               0.22

* The command line argument -H 'Accept-Encoding: gzip' didn't seem to have an effect on compressed output, although we could tweak compression levels and watch it in Firefox/Firebug.

Posted on Dec 8 2010 by Dhwanit Category: Apps

Setting up Apache (httpd), PHP with APC on an Amazon AWS Basic AMI

Disclaimer #1: This post is more of an online reference for me to be able to set up an Apache web server and PHP with Alternative PHP Cache (APC) enabled on a basic Amazon Web Services AMI (clean Linux box). Most of the commands don't have explanations -- I'm sure you'll be able to understand more by issuing "man <command>" at the Linux prompt.

Disclaimer #2: Commands provided here may be time-sensitive; apps or paths may change over the course of time. Don't leave a comment that something's not working if you're viewing this in 2015 :)

The commands

This post assumes that you are able to log into the Amazon AWS console, create instances and log in via SSH to the instances you've created. The commands below are all typed into an SSH terminal as root. None of the output has been recorded (including the prompt) -- this makes it easy to copy the commands one by one instead of scanning through output to find the next command.

yum -y install httpd
yum -y install httpd-devel
yum -y install mysql
yum -y install mysql-server
yum -y install php
yum -y install php-mysql
yum -y install php-devel
yum -y install php-gd
yum -y install php-xml
yum -y install php-bcmath
yum -y install php-xmlrpc
yum -y install php-pear

Before installing APC, the following packages may be needed. It's safe to try the APC installation first; if it fails, run the commands below and try again:

yum -y install pcre
yum -y install pcre-devel

To install APC, run the command below:

pear install pecl/apc
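Depending on the AMI, the pecl build may not end up enabled in PHP automatically. If "php -m" doesn't list apc after the install, adding an ini file along these lines (the path is typical for this AMI family, but may differ on yours) and restarting Apache usually does the trick:

File: /etc/php.d/apc.ini

extension=apc.so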

Post-installation commands: these ensure that Apache and MySQL run at startup if you ever reboot or stop/start your Amazon instance.

chkconfig httpd on
chkconfig mysqld on

Check if Apache and MySQL are scheduled to run at startup:

chkconfig --list

Posted on Jul 6 2010 by Dhwanit Category: Design

How we built the Horlicks WizKids website

Horlicks WizKids is South Asia's largest interschool fiesta, with over 200,000 children from four countries participating over a period of five months. “iWiz,” the online competitions section of the website, was built by WebrMedia to capture the buzz in a youthful-looking website. iWiz is a platform for school children to participate in and win online competitions for best photographer, best journo writer, best digital artist and various other categories.

The website was built over a span of six weeks. This period included the initial requirements gathering, design of the user interface, discussions on the user experience workflow and development of the server software. The website was launched after three independent specialists tested it on the launch server.

Horlicks WizKids

Initial requirements

Week 0:

Being connected to the Internet all the time has its distinct advantages. Most importantly, client meetings are reduced (if you live in Bangalore or have an idea of its vehicular traffic, you’d understand why) and a lot of discussion happens over chat. The salient points are noted on-record in email conversations seeking client approval. Having approvals on-record is advantageous to both the client and us. The client need not worry about their ideas not being implemented on the website: if they’ve signed off on an approval email, we will implement their requirement on the website. We don’t have to worry about unnecessary scope creep that can potentially delay the website or add to the client’s cost of development.

For the WizKids website, the initial discussion took place in a face-to-face meeting which kickstarted a back-and-forth email discussion before we formally presented a proposal. The client took close to ten days to compare our proposal with those from other providers – a due diligence task we encourage our clients to do – before awarding the contract to us.

Week 2:

We learned at a later stage that the estimated cost we had quoted for developing the website was amongst the highest in all the proposals they received. The reason they chose us was the professional manner in which we collected the requirements, visually presented a mindmap of the website and provided a detailed estimate with individual break-up of hours required and costs involved for design, development, testing and deployment.

We do this for every enquiry we receive, regardless of whether we’re selected in the end. We’re proud to say we’ve achieved close to 80% conversion rate (till date) on the projects that we’ve bid on.

Design

The Horlicks WizKids website is a few years old. It had always been a static website with a “new” design each year. It contained a page for event schedule, a page for students to download PDF brochures & registration forms for the event, a page about the event itself and a contact information page. The dynamic component was spread over Facebook, Twitter and other social networking websites.

From 2010, EduMedia wanted to have a consolidated website to incorporate both the static information as well as the dynamic component into a single Horlicks WizKids website. We extended it by converting the dynamic component into a playground for children to engage in a participatory prize winning set of online competitions.

Week 3:

The design was a challenge. We had never worked on a website that had to, first of all, integrate well with a “static component,” then showcase an informal, youthful appeal, and finally incorporate the client’s requirements in an intuitive, easy workflow for children in grades 1 through 12. By researching websites specifically meant for kids (PBS Kids, Discovery Kids, Nat Geo Kids etc.) and combining that with the client’s requirements, we arrived at a broad set of user interface goals, incorporated them into a wireframe and sent it to the client:

  • Primary navigation for dynamic iWiz components and secondary navigation for WizKids components; both navigation systems will be visible simultaneously.
  • An area for promoted content above the website fold.
  • An area for user generated content below the website fold.
  • System for selecting a city for user generated content on landing pages.
  • Individual landing pages for every city that will follow the promoted content above fold; user content below fold design.
  • Headings to use a “kiddy” font (for lack of a better term)
  • On sign-in, redirect the user to the online competitions area.

Horlicks WizKids website wireframe

Some design concepts that made it into the approved visual prototype were influenced by Yahoo Kids (“kiddy” font styles), Mivokids (color combination) and Capstone Kids (sky background).

Development

Week 4:

We at WebrMedia maintain multiple starting codebases of Drupal in our repository. The one best suited to Horlicks WizKids was our social networking lite codebase. Because we have these codebases & starter databases ready, we save a lot of time getting our work environment ready for a new website.

The website software included the development of a custom Drupal module that maintains a countdown of event schedules as they move geographically. For example, if an event is going on today in Bangalore and the next event is slated for Mumbai, the countdown widget shows both of these in India time, based on input from the database. Management is hands-free.

Content moderation was very important for EduMedia. Any user generated content will go through an admin approval process before it gets published on the website. This way, EduMedia maintains the quality of the content published. We integrated the modr8 module with WizKids and provided a uniform administrative user interface for batch approval of content.

We also built in a role-based control mechanism for posting new content. Administrators and staff of EduMedia do not need any approval to post messages; they also have the ability to control which content gets promoted or published. With the UI we built, pushing content above the fold or keeping it below the fold was a matter of checking or unchecking a check-box in the administrative interface.

Testing & deployment

Week 6:

Test benchmarking was conducted using ab (Apache Bench). This software simulates multiple concurrent connections to a website to test its processing capabilities. The server in use is a quad-core Xeon with 4GB of RAM, with a good chunk of its processing power available on demand to the Horlicks WizKids website.

We sustained test efforts for six hours using Apache Bench, with concurrency ranging from 20 to 300 requests. During this process we ended up blacklisting our local IP address several times because the hosting provider's firewall thought it was the beginning of a DoS attack! (‘Twas good to know that the firewall worked!) We also ended up killing the server once during the testing period: 134% processor load, 96% memory consumption and 99.7% swap space utilization.
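For reference, a run at the top end of that range looks something like this (the URL is a placeholder; -c sets the concurrency and -n the total number of requests):

ab -n 10000 -c 300 http://www.example.com/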

Testing provided a means to gauge which blocks and pages could be cached and which need not be, thereby optimizing the website and arriving at a sustained utilization of about 20% processor load, 45% memory consumption and 0% swap usage at 300 concurrent connections.

Live!

The website went live on June 30, 2010. Horlicks WizKids – the event – kicked off its 2010 edition in the first of seventeen Indian cities on July 3. Go to http://www.krayonevents.com/horlicks/ to see it in action.

Posted on May 17 2010 by Dhwanit Category: Design

HTML and CSS tricks for good website design

Even the best and most experienced CSS/HTML designers out there don't carry the vast myriad of rules (and their combinations) governing the two languages in their heads! I'm no different. So here's a list of online resources that I visit from time to time when certain CSS or HTML tricks are needed during the development of a website.

Equal height columns using CSS

There are times when a 3 or 4 column HTML page needs to have equal-height columns. While Javascript can achieve this quite easily, doing it with HTML and CSS alone can get quite tricky.
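As a taste of the kind of trick involved, here is one CSS-only approach using display: table (it works in IE8 and later, and isn't necessarily the technique used in the resources I bookmark):

<div class="columns">
  <div class="col">First column</div>
  <div class="col">Second column</div>
  <div class="col">Third column</div>
</div>

/* Cells in the same table row always share the height of the tallest cell,
   so the three columns stay equal-height regardless of their content. */
.columns      { display: table; table-layout: fixed; width: 100%; }
.columns .col { display: table-cell; width: 33.3%; padding: 10px; }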

Note: this post is a work-in-progress. That means, I use this as a bookmark and will keep updating it whenever I find new resources to work with.

Posted on May 4 2010 by Akshay Category: Apps

How to detect the popup blocker in Chrome

The usual method for detecting popup blockers across most browsers is to use a snippet of code which reads something as follows:

<script type="text/javascript" language="Javascript">
 var popup = window.open('http://www.google.com');
 if (!popup || popup.closed
  || typeof popup.closed == 'undefined') {
    popUpsBlocked = false;
} else {
    popUpsBlocked = true;
}
</script>

Google Chrome relegating a window to popup hell!

Unfortunately, the above piece of code does not work in Google Chrome. This is because Chrome opens the popup, loads the page, and even parses all the Javascript on the "blocked popup". It just doesn't display the popup to the user. This behavior is unlike other major browsers (Internet Explorer, Firefox & Safari) which simply don't load the popup at all (i.e. they don't send out an HTTP request to fetch the page).

So, how do you detect Chrome's popup blocker? Here's one method which works using cookies and the window.innerHeight property available in Javascript's window object.

The great thing about cookies is that they are available to all tabs/windows open across a browser session within a particular domain name. What that means is that a cookie set by a script in one browser window on ".example.com" will be instantly accessible to a script in another browser window on another page on example.com.

The point regarding window.innerHeight is that Google Chrome returns all position and height values as 0 when the popup is blocked. But it also returns zero values for an unpredictable amount of time when the popup is actually opened, until the DOM fully loads. After a couple of seconds, window.innerHeight returns the correct values.

The way we actually use these features (and browser quirks ;) to detect the popup blocker in Chrome is:

  1. Pop up your window:

     popUp('/path/to/popup.html', 'uniquePopupWindowName');

  2. Set a timer to run the popupCookieCheck() function after a preset amount of time:

     var popupBlockerWarningDisplayed = false;
     var cookieCheckTimer = null;
     cookieCheckTimer = setTimeout(
         'popupCookieCheck(\'uniquePopupWindowName\');', 3500);

     // Runs once the timer fires: warn (once) if the popup never set its cookie.
     function popupCookieCheck(windowName) {
         if (!checkWindow(windowName)) {
             if (popupBlockerWarningDisplayed == false) {
                 alert('Popup blocker detected!');
                 popupBlockerWarningDisplayed = true;
             }
             return false;
         }
         return true;
     }

     // True if the popup managed to set its cookie to 1.
     function checkWindow(windowName) {
         var theCookie = getCookie(windowName);
         if (theCookie != null && theCookie == 1) {
             return true;
         }
         return false;
     }

     function popUp(url, windowName) {
         window.open(url, windowName,
             'width=100,height=100,location=0,toolbar=0,status=0,scrollbars=0');
     }

     // Plain cookie lookup by name; returns null if the cookie isn't found.
     function getCookie(check_name) {
         var a_all_cookies = document.cookie.split(';');
         var a_temp_cookie = '';
         var cookie_name = '';
         var cookie_value = '';

         for (var i = 0; i < a_all_cookies.length; i++) {
             a_temp_cookie = a_all_cookies[i].split('=');
             cookie_name = a_temp_cookie[0].replace(/^\s+|\s+$/g, '');
             if (cookie_name == check_name) {
                 if (a_temp_cookie.length > 1) {
                     cookie_value = unescape(
                         a_temp_cookie[1].replace(/^\s+|\s+$/g, ''));
                 }
                 return cookie_value;
             }
             a_temp_cookie = null;
             cookie_name = '';
         }
         return null;
     }

  3. In the popup window, set a cookie if window.innerHeight is greater than 0:

     <head>
     <script type="text/javascript" language="Javascript">
     function setWindowCookie() {
         document.cookie = 'uniquePopupWindowName=1;path=/';
     }
     </script>
     </head>

     <body onload="if(window.innerHeight!=0){setWindowCookie();}">

  4. The popupCookieCheck() function runs on the main page after the cookieCheckTimer fires (3500ms in our case) and checks whether the popup was actually displayed. If not, it displays a message asking the user to enable popups for your website.

The major disadvantage of this method is that the popup window has to be on the same domain (hint: use iframes). But at least you have popup detection in Google Chrome, and it's cross-browser compatible! Yay. :P

PS: I'm sure you can make the code prettier and function better. These snippets are just to get you started. Some of the abrupt linebreaks are to fit the code into the fixed width layout.

PS#2: Don't spam people! It's not cool.

Posted on Apr 26 2010 by Dhwanit Category: Design

HTML 5: The future of rich-media web

In the ever-changing world of the web, it's surprising to see how long HTML 4.x has held on to its top spot as the language of choice for developing websites. If "4.x" drew a blank look, here's a (very) brief history of the web:

HTML, or Hyper-Text Markup Language, was invented in 1992 by British computer scientist Tim Berners-Lee, followed by a flurry of activity on improving the fledgling language under the aegis of the IETF and W3C - both international, nonprofit organizations tasked with developing HTML and the world-wide web. In 1995, the specification for HTML version 2.0 was published, and January 1997 saw the release of HTML 3.2. By 1998, HTML 4.0 was out, and 1999 was the birth year of HTML 4.01 with minor updates; the two are collectively referred to as HTML 4.x.

This period also saw the browser wars being fought out (Internet Explorer vs. Netscape Navigator), which played a major role in fueling HTML's rapid development. By 2000, Internet Explorer, bundled free as part of the Windows operating system, had won the browser wars by driving Netscape out of business and claiming a 90%+ share of browser users!

After this, everything in Internet technology development seemed to freeze, following the massive Internet bubble burst of 2000-01.

Seeds of HTML 5


It wasn't until 2004 that the WHATWG, a working group formed by people from Apple, Mozilla and Opera outside the W3C, started working on an update to HTML 4.01, which by then had been published as the ISO/IEC 15445:2000 standard. Originally titled Web Applications 1.0, the updated specification was published by the W3C in January 2008 as a working draft, under a new name: HTML 5. As of April 2010, this update has not been published as a W3C recommendation, meaning widespread adoption is still some time away.

So, why are we talking of HTML 5 when a world standards body has not recommended it for general use?

Enter Browser Wars 2.0

The scenario looks very similar to the browser wars of the late '90s. Let's call that Browser Wars 1.0. Back then, while the standards body was methodically working on the HTML language, the two dominant browsers - Netscape Navigator and Internet Explorer - added all sorts of silly markup tags like <blink> and <marquee> to their implementations of HTML, in a fight to the death over who would keep the biggest slice of Internet users. Browser Wars 1.0 was fought outside of the W3C recommendations.

This time around, Browser Wars 2.0 includes the W3C. Although HTML 5 has been published only as a working draft, browsers such as Google Chrome, Apple Safari, Mozilla Firefox and Opera have already started implementing HTML 5 support. That Chrome and Safari have the widest coverage of HTML 5 features comes as no surprise: the two biggest proponents of HTML 5 are Google and Apple. The current editor of HTML 5 is Ian Hickson, a Google employee, and Apple's rendering engine, WebKit, forms the basis of both the Google Chrome and Apple Safari browsers. How's that for collaboration?

Microsoft is a bit late, but is nevertheless leaning towards a partial HTML 5 implementation in its upcoming Internet Explorer 9.0 browser. From an over 90% share of the browser market when it won Browser Wars 1.0, Internet Explorer has steadily dropped to just 65% as of early 2010. This drop has been Microsoft's wake-up call to enter Browser Wars 2.0 :-)

Unlike Browser Wars 1.0, which was a skewed fight between a small start-up and the giant Microsoft, Browser Wars 2.0 promises to be an equal fight centered around three big players: Google, Apple and Microsoft.

Why HTML 5?

One of the biggest evolutions to the language in version 5.0 is native support of rich media such as video and audio.

HTML is a markup language with a lot of tags that tell a browser how to render a particular component of a web page. HTML 4.01, the current web standard since 2000, has no means of defining rich-media components. This is where Macromedia (since acquired by Adobe) Flash stepped up to the plate. Steadily through the 2000s, as broadband proliferation exploded throughout the world, websites became more and more media-rich, incorporating video and audio into web pages with the help of Adobe Flash.

However, Adobe Flash is a proprietary platform, and the W3C, as a world standards body, had to find an open alternative to it. This is where HTML 5 adds new tags to the language that allow web developers to embed video and audio using open web standards. Tags like <video>, <audio> and <source> are new in HTML 5, and browser support for them already exists in the likes of Firefox, Safari and Opera.
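As an illustration, embedding a video with the new tags is as simple as the snippet below (the file names are placeholders; the nested <source> tags let the browser pick a format it supports):

<video width="640" height="360" controls>
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.ogv" type="video/ogg">
  Your browser does not support the HTML 5 video tag.
</video>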

Large websites like YouTube already use HTML 5. Browsers on extremely popular Internet devices such as the iPhone and iPad, which lack Flash capabilities, support rich-media HTML 5 tags natively.

Summary

In my opinion, the verdict on HTML 5 is still some time away for most web users. But with very strong proponents like Google and Apple propelling its development behind the scenes, combined with the exponential growth of low-powered mobile Internet devices that cannot handle Flash's processing- and memory-hungry requirements, rich-media websites developed using HTML 5 do look like the direction of the future.

Posted on Apr 21 2010 by Akshay Category: Apps

Callsheet Management, simplified!

We're proud to release Simple Callsheet Manager into the public sphere as freeware! It's a nice little online collaboration tool which can be used by sales teams in different locations (or in the same location, if you want). You could also probably use it as a rudimentary CRM system while you're trying to bootstrap your start-up. I know we do! ;)

If your sales or marketing staff are usually grouped into small teams, this nifty little app we coded over a weekend might just save you a lot of money! It enables small teams to manage their callsheets online, whether those record phone conversations, emails or meetings. The sales executive takes down notes and sets a "follow up" date.

Each sales executive gets their own worksheet which is automatically populated with reminders regarding follow ups and other due dates. Every user can track the progress of their worksheet events and goals by using a simple 0-100% scale.

Executives can be grouped into teams which work on callsheet threads collaboratively. Teams can view other teams' worksheets to gain an insight into what the overall progress of a campaign is, and the best way to approach a follow up call with a client.

Please download and use it for free* and give us your feedback to help improve the software. If you'd like us to set up this application for you on your server, or host it for a small fee, get in touch with us and we'd be glad to help.

* Free as in "free beer". No strings attached :)

Posted on Apr 21 2010 by Dhwanit Category: Design

Don't make me think!

I came across a book about website usability that was first published back in 2001. According to the author, Steve Krug, the most fundamental usability rule is "Don't make me think." A lot has changed in the last ten years, including browser technology improvements, better last-mile connectivity, an explosive growth in broadband access and larger screen resolutions. However, this rule is absolutely as relevant today as it was ten years ago.

What exactly does "don't make me think" mean? It means that as far as possible, when someone looks at a web page, he or she should intuitively be able to start using it with minimal thought or effort. The web page should be self-evident. Obvious. Self-explanatory.

Making a web page self-evident isn't as difficult as you might think. People are conditioned to using the web: having browsed hundreds of websites over the years, they have subconsciously picked up universally accepted elements of web design. As long as your website's design follows similar principles, your users will have a pleasant experience browsing through its pages.

Take, for example, clicking on a website's logo. Most of us are so used to clicking on the logo to return to the homepage that if it doesn't work, or takes us to some other page, there is a bit of confusion... and that makes us think!

Similarly, having a logout or sign out link accessible from the top-right corner of the page is also embedded in our subconscious. Which website was the first to put this on the top-right corner is irrelevant. What's important is that quite a few of the websites we use on a day-to-day basis keep their sign-out link accessible from somewhere in the top-right region of the page.

Examples:

Signout or logout button on top-right of web pages

Elements that make us think

A website is a fusion of so many different things: text, images, animation, audio, video, forms, menus, popups or a combination thereof. A whole bunch of things on any website can make us pause and think. They may be associated with the visual depiction of certain elements, or with clever-sounding names whose meaning is not immediately apparent.

An example mentioned in the book on a scale of "obvious" to "requires thought" is when someone is scanning a company page for a list of jobs.

  • If a link says "Jobs" it is self-evident. Click.
  • If the link states "Employment Opportunities" then the user thinks for a fraction of a second before understanding it to mean "Jobs." Click.
  • If the link says "Work with us," there's a significantly longer pause for thought. The user is thinking 'Hmmm... Will that link take me to a list of jobs available or would it take me to a partner page? Should I click to find out?'
  • It's worse if a link says "Job-o-Rama." The user will have no earthly idea what that means. It may be commonly used within your organization, but it's not going to be obvious to a user looking for a job in your company.

Take another example: A choice of visual elements displayed on a web page, whose functionality is to mimic a button. If you were faced with these four choices, which one would you click immediately?

Having a user in the third or fourth situation above isn't conducive to the website's usability. It's a sure-shot way of losing the user's interest very quickly.

Designing for the average user

Not everything on your website can be self-evident. Depending on your business needs, or your website's goals, helpful tips can be inserted into the layout as text, videos, popups etc., for the user to refer to, and help focus his or her thoughts towards effective usage of your website. This makes the website self-explanatory.

Some users spend a surprisingly long time fighting with, and figuring out, a website that's frustrating them. They tend to blame themselves for their inability to use the site, instead of blaming the site's obtuse design! It may be because they spent a considerable amount of time finding your site in the first place, or they may not know of any alternative. Starting over isn't always attractive in these cases.

The average Internet user will give your website a chance. He or she will spend a reasonable amount of time understanding its functionality and figuring out how to use it. It's these users, an overwhelming majority of web surfers, that your website should cater to. Make their stay comfortable. Let them have a pleasant experience. Having a well-designed, well-thought-out, self-evident or self-explanatory website for the average Internet user should be your company's goal. We at WebrMedia follow Steve's advice on all the websites we design... don't make the user think.

I will end this post with a few words from the book: "Making pages that are obvious is like having good lighting in a store: it just makes everything seem better. Using a site that doesn't make us think about unimportant things feels effortless, whereas puzzling over things that don't matter to us tends to sap our energy and enthusiasm - and time."
