Web Monitoring

Web Site Monitoring and Performance Insights

Web Performance Measurement: Measuring what users experience

Just a few years ago, when broadband envy was a common condition and DSL was considered high-speed, testing Web site performance was largely an internal affair. As long as you followed basic best practices in building the site and kept the servers up and the pipes open, you were ready for business. Consumers coming off the painful slowness of dial-up had far lower expectations and far greater patience.

But that all seems like such a long time ago. While the U.S., at 60 percent broadband penetration, ranks 20th (!) globally (far behind South Korea’s 95 percent), a solid majority of U.S. users now have fast connections. They’re paying for speed, and they expect sites to be fast. Add in rich functionality, video, Flash, etc., and you’re looking at an experience made or broken by site performance, performance that can no longer be effectively measured from the inside out.

Believe it or not, some major U.S. retailers still do not have an organized regimen for external testing of their sites, or for determining if they can handle a heavy surge in holiday traffic.

In the broadest sense, testing falls into two categories: ongoing evaluation and tweaking to optimize daily performance, and peak load testing to determine overall site capacity and potential breakpoints.
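As a rough sketch of the second category, a peak load test can be as simple as driving many concurrent request loops against a key page for a fixed window and watching latency and error rates as traffic climbs toward the breakpoint. The Node.js/TypeScript example below is only illustrative; the target URL, concurrency level, and duration are hypothetical placeholders, not figures from this article, and a real capacity test would use purpose-built load generation distributed across many machines.

```typescript
// Minimal peak-load sketch (Node 18+): drive N concurrent request loops
// for a fixed duration and report request count, errors, and p95 latency.
// TARGET_URL, CONCURRENCY, and DURATION_MS are illustrative placeholders.

const TARGET_URL = "https://www.example-retailer.com/"; // hypothetical target
const CONCURRENCY = 50;                                  // simulated simultaneous users
const DURATION_MS = 60_000;                              // one-minute burst

async function worker(latencies: number[], errors: { count: number }, stopAt: number): Promise<void> {
  while (Date.now() < stopAt) {
    const start = Date.now();
    try {
      const res = await fetch(TARGET_URL);
      await res.arrayBuffer();            // read the body so timing covers the full download
      if (!res.ok) errors.count++;
      latencies.push(Date.now() - start);
    } catch {
      errors.count++;                     // network-level failure
    }
  }
}

async function runLoadTest(): Promise<void> {
  const latencies: number[] = [];
  const errors = { count: 0 };
  const stopAt = Date.now() + DURATION_MS;

  await Promise.all(
    Array.from({ length: CONCURRENCY }, () => worker(latencies, errors, stopAt))
  );

  latencies.sort((a, b) => a - b);
  const p95 = latencies[Math.floor(latencies.length * 0.95)] ?? 0;
  console.log(`requests: ${latencies.length}, errors: ${errors.count}, p95: ${p95} ms`);
}

runLoadTest().catch(console.error);
```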

In either case, the only way to quantify user experience is to measure what users are actually experiencing. Unless you’re the local bike shop serving only your immediate area, the testing needs to be done across a wide geography and multiple backbones. And it needs to use an actual Web browser and go through the same types of page-view sequences and transactions as a typical user would. There’s simply no other way to get a true perspective on what users are really experiencing.
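One way to approximate that kind of outside-in measurement is to script a real browser through a representative transaction and record the timing of each step, then run the same script from agents in different regions and on different backbones. The sketch below assumes Playwright as the browser driver; the URL, selectors, and steps are hypothetical placeholders rather than anything specific to the sites discussed here.

```typescript
// Scripted real-browser transaction measurement (sketch, Playwright assumed).
// The URL and selectors below are hypothetical placeholders.
import { chromium } from "playwright";

async function measureShoppingFlow(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const timings: Record<string, number> = {};

  let start = Date.now();
  await page.goto("https://www.example-retailer.com/");  // step 1: home page
  timings["home"] = Date.now() - start;

  start = Date.now();
  await page.click("a.product-link");                    // step 2: product page
  await page.waitForLoadState("load");
  timings["product"] = Date.now() - start;

  start = Date.now();
  await page.click("button.add-to-cart");                // step 3: add to cart
  await page.waitForSelector(".cart-confirmation");
  timings["add_to_cart"] = Date.now() - start;

  console.log(timings);  // per-step times in milliseconds, as one user would see them
  await browser.close();
}

measureShoppingFlow().catch(console.error);
```

Run from multiple monitoring locations, the same script yields the geographic and backbone spread the paragraph above describes.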

Source: http://keynote.com/benchmark/online_retail/christmas_article.shtml

October 4, 2011 | Web Load Testing, Website Performance

Website Performance and Cross-Browser Compatibility

Cross-browser compatibility is a priority that can no longer be ignored, and teams need to understand the balance between pre-release quality assurance testing and ongoing monitoring of a production site in a cross-browser environment. To prove website performance in every major browser, cross-browser issues need to be identified and resolved while the site is being built, so that pages load correctly in whichever browser the user happens to choose.
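As a concrete illustration of that kind of pre-release check, the sketch below loads the same page in several browser engines and confirms that a key element renders in each. It assumes Playwright as the driver; the URL and selector are hypothetical placeholders.

```typescript
// Cross-browser smoke check (sketch): load the same page in several engines
// and confirm a key element renders. URL and selector are placeholders.
import { chromium, firefox, webkit } from "playwright";

const PAGE_URL = "https://www.example-retailer.com/";  // hypothetical page
const KEY_SELECTOR = "#main-navigation";               // element every browser must render

async function checkAllBrowsers(): Promise<void> {
  for (const engine of [chromium, firefox, webkit]) {
    const browser = await engine.launch();
    const page = await browser.newPage();
    try {
      await page.goto(PAGE_URL);
      await page.waitForSelector(KEY_SELECTOR, { timeout: 10_000 });
      console.log(`${engine.name()}: OK`);
    } catch (err) {
      console.error(`${engine.name()}: FAILED - ${(err as Error).message}`);
    } finally {
      await browser.close();
    }
  }
}

checkAllBrowsers().catch(console.error);
```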

The ultimate purpose of website performance monitoring is to track and maintain the robustness of the entire site infrastructure, in terms of both performance and availability.
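At its simplest, that kind of monitoring is a recurring availability-and-latency check against key pages, with an alert whenever either slips. The sketch below assumes a single hypothetical URL, check interval, and threshold; a production monitor would check many pages from many locations and escalate rather than just log.

```typescript
// Minimal availability/latency check (sketch, Node 18+). URL, interval, and
// threshold are hypothetical; a real monitor runs from many locations.
const MONITORED_URL = "https://www.example-retailer.com/";
const CHECK_INTERVAL_MS = 60_000;  // once a minute
const SLOW_THRESHOLD_MS = 3_000;   // flag pages slower than 3 seconds

async function checkOnce(): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(MONITORED_URL, { redirect: "follow" });
    const elapsed = Date.now() - start;
    if (!res.ok) {
      console.error(`DOWN: HTTP ${res.status} after ${elapsed} ms`);
    } else if (elapsed > SLOW_THRESHOLD_MS) {
      console.warn(`SLOW: ${elapsed} ms (threshold ${SLOW_THRESHOLD_MS} ms)`);
    } else {
      console.log(`OK: ${elapsed} ms`);
    }
  } catch (err) {
    console.error(`DOWN: ${(err as Error).message}`);  // connection-level failure
  }
}

setInterval(checkOnce, CHECK_INTERVAL_MS);
checkOnce();
```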

Many of these issues are browser-agnostic (if the server has an outage, it’s out for all browsers), but some do involve specific browsers. Microsoft recommends that developers detect specific features rather than specific browsers. Whichever strategy is used, the most critical step in the multiple-browser development process is detailed, comprehensive pre-release quality assurance testing. A very thorough solution includes end-user monitoring using live installations of the two leading browsers, IE and Firefox. Regularly scheduled testing with an alternate browser needs to be supplemented by continuous monitoring with one of the major browsers.
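The feature-detection approach looks roughly like the sketch below: test whether a capability exists before using it, instead of branching on the user-agent string. The specific feature checked here (localStorage) is only an example, not one the article calls out.

```typescript
// Feature detection (sketch): test for the capability itself rather than
// branching on the browser name. The localStorage example is illustrative.

// Fragile approach: user-agent sniffing breaks when browser strings change.
if (navigator.userAgent.indexOf("MSIE") !== -1) {
  console.log("IE-specific workaround (brittle: tied to the UA string)");
}

// Preferred approach: check whether the feature is actually available.
function getLocalStore(): Storage | null {
  try {
    // Accessing localStorage can throw when storage is disabled or unsupported.
    return typeof window.localStorage !== "undefined" ? window.localStorage : null;
  } catch {
    return null;
  }
}

const store = getLocalStore();
if (store) {
  store.setItem("lastVisit", new Date().toISOString());
} else {
  // Fall back to cookies, or simply skip the enhancement.
}
```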

September 20, 2011 | Web Performance Testing, Website Performance