Tuesday, December 24, 2013

Collaborate or take orders?

"If you can't negotiate about the work, you aren't collaborating, you are taking orders." - Ester Derby (@estherderby)

This is exactly why I left my last job.

During the 5 years I was there I heard the word "collaborate" so many times that I became numb to any effect it might have on me. At some point it became a big joke because the reality was something completely different. Here is what "collaborate" really meant where I worked; see if any of this resonates with your experiences.

The upper managers "collaborate" with each other then disseminate their plans to those of us who are lower not the org chart.

Questioning things in the "agile open workspace" may be seen as insubordination and ruin your career; you just don't know which question will have that effect.

The ceremonies of stand-ups, planning, retros, tasking, sprint reviews, etc., all feel more like dogma than effective development practice.

The most important information regarding the project is closely guarded and held until we "need to know".

There are pre-stand-up meetings to try to sanitize the news for the real stand-up.

The idea that came out of the software conference is not welcome in our project because no one else in the company could support it in production.

Making any changes at all to your workstation without express permission could get you fired.

After many months of prototyping, experimenting, and talking with customers, the realization comes that the best way to build something great is to start over completely. This news is so unwelcome that no one will take the risk of saying it except at the bar.

Responding with "How do you know they are best practices?" causes the product owner to pull rank.

Talking to a customer directly is punished, and dire warnings are sent out by email that we must "Filter everything through me" and "We need to tell a consistent story to the business."

Command-and-control may be the kryptonite of collaboration.

Monday, November 29, 2010

Speeding up Web pages with Expires Headers

We have an application where I work that is used extensively for business reporting. The audience for these reports includes all of the top management of the company, including the CEO. The problem we face is that its performance is slow for some of the most important users, and they are very vocal about that problem. A few of us have been working on how to speed up the user experience, but where do we look first? In this article I will tackle one of the simple solutions we found and how we proved to ourselves that it really did make things faster.

This particular application is entirely web-based, with no client-side installation necessary. The fact that it is a reporting tool is not nearly as important as the fact that it is a full web application, with all the benefits and downsides that go with it. Another important fact is that the users are most often over 2000 miles away in another office, and their experience is much worse than that of the users where the app is hosted. Those remote users, however, are very important customers and include people who report to the CEO of the company. Suffice it to say that when they complain it is heard loud and clear.

So where does one begin in the effort to increase the speed of a web app? In our situation the network stood out as a primary candidate, since the speed was better in the main office than in the remote office. Our network folks did their own analysis and recommended an upgrade from 45 megabits to 1000 megabits (1 gigabit), and we all thought this would be the answer to the problem. This represented a speed increase of over 22 times our old connection and surely would fix the problem, but as it turned out it did not. The measurements we had been taking every day showed that response times remained slower for the remote users by a factor of 4-5: if a user in the main office saw response times of 4 seconds, the remote users would see 16-20 seconds for the same page. So we had to start looking elsewhere.

During this same time frame some colleagues turned me on to the recommendations written up by Yahoo called "Best Practices for Speeding Up Your Web Site". While digging into that information I learned that a surprisingly large share of response time issues in web apps are due to client-side issues, not problems deep in the tiers of the application server and database. The recommendations go on to list the changes one can make to improve speed and even provide some ways of measuring how bad the problems might be. Armed with this new (to me) information, off I went to analyze and make recommendations on how to improve performance.

The first tool I tried was the YSlow add-on that works with Firebug and Firefox. This tool grades pages based on the Yahoo recommendations, just like the grades we got in school: A is the best and F is failing. In our case we got a lot of D's and F's because of gaps in knowledge when the servers were set up. Some of the recommendations were not possible for us to use because it is not a public application and we purchased it from a well-known vendor, so any changes to the internals of the app were beyond our capability. However, based on some of the failing grades we could make some simple setting changes on the web server and possibly get a speed increase. The problem is: how do we know when things have gotten faster, and how much faster?

The first rule of experimentation is to limit yourself to only one change at a time in order to observe the effect of that change. It is tempting to go and make a whole bunch of changes at once, but doing this makes it next to impossible to tell which of the changes contributed positively to the speed and which ones negatively. This led to a simple approach of changing just one setting on the web server and then measuring the difference. We chose to start with one of our F grades, in the area of adding Expires headers to the content. Yahoo in this case recommends what they call "far future" Expires headers for static content. In other words, content that does not change very often, like images, icons, CSS, JavaScript, etc., should get an expiration date that is a long way out, like 10 years. Just to see how this might affect things, we made the change on the development environment and ran some tests. The procedure seemed simple enough:

  1. Configure the web server to add Expires headers to the static content (a sketch of the idea follows this list)
  2. Clear the browser cache completely to simulate a new user on the site
  3. Using the tool to measure the speed and give the grades, load the page the first time and observe the speed
  4. Navigate to a different page
  5. Navigate back to the page in question and observe the speed again.
  6. The difference in speed should be the improved performance.
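Our actual change was a configuration setting on the web server itself, but the idea behind step 1 is easy to sketch in code. The following is a minimal, hypothetical Python example, not our server's configuration: a little static file server that stamps a far-future Expires header (and a matching Cache-Control max-age) on static content. The file extensions and the 90-day window are assumptions for illustration.

    # Minimal sketch: a static file server that adds far-future caching
    # headers to static content. Hypothetical; our real change was a
    # configuration setting on the web server, not custom code.
    import time
    from email.utils import formatdate
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    STATIC_EXTENSIONS = ('.css', '.js', '.png', '.gif', '.jpg', '.ico')
    NINETY_DAYS = 90 * 24 * 60 * 60  # expiration window, in seconds

    class ExpiresHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Only static content gets the far-future headers; pages that
            # change often are left alone.
            if self.path.lower().endswith(STATIC_EXTENSIONS):
                self.send_header(
                    'Expires', formatdate(time.time() + NINETY_DAYS, usegmt=True))
                self.send_header('Cache-Control', 'max-age=%d' % NINETY_DAYS)
            super().end_headers()

    if __name__ == '__main__':
        # Serves the current directory on http://localhost:8000/
        HTTPServer(('localhost', 8000), ExpiresHandler).serve_forever()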
The problems arose quickly when I found out that most of the users of this particular web app were using Internet Explorer, and most did not know Firefox even existed. I then had to switch my experimentation to another set of tools that work with IE. Since I am focused on free and/or cheap tools, I ruled out some whose cost I could not justify. One that stood out was a tool called dynaTrace AJAX Edition. HttpWatch also gets an honorable mention, but the full version costs money and the free version is a little too limited for most uses. dynaTrace AJAX Edition also provides grades and recommendations but adds much more detail beyond YSlow and HttpWatch.

I decided to modify my experiment because we have two environments to test on, QA and DEV, so we could use Expires headers on one environment and leave them off on the other. This would give us side-by-side comparisons without having to make changes over and over again. The second change I made was to define a set of test steps to follow so that we could repeat the same click-stream on each environment, again keeping everything as constant as possible. Another change to my procedure was to perform the steps at least once on each environment first so that the application server was primed and ready, since the first use of the app after a restart is very slow as objects get loaded into memory on the server; all subsequent requests are faster (a subject for another article). The new procedure looks more like this:

  1. Set the DEV environment to use Expires headers with an expiration of 90 days.
  2. Set the QA environment the same as PROD, using no Expires headers
  3. Restart the servers if necessary
  4. Run through the click-stream script once to wake up the servers, either manually or with an automated script in Selenium or another automation tool (a subject for another article)
  5. Clear the cache on the browser.
  6. Run through the click-stream script and let dynaTrace AJAX Edition take its measurements.
  7. Repeat the click-stream script and take a second set of measurements. Optionally, close the browser and re-open it just to make sure it is using the persistent cache, not just the in-memory one.
This modified procedure should produce a long response time right after the cache is cleared, because the browser needs to download all of the components. The second run through the click-stream script should produce noticeably different speeds than the first, because the cache is primed with non-expiring content and the browser only needs to load the components that do not have Expires headers. The difference between the two speeds shows how much of an improvement the Expires headers made. Here is a great article on how the browser cache works: "Best Practices on Browser Caching".
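To make the click-stream idea concrete, here is a sketch of what such a script might look like in Python with Selenium. It is an illustration, not the script we actually ran: the URLs are placeholders, and the timings come from the browser's Navigation Timing API rather than from dynaTrace.

    # Sketch of a repeatable click-stream with rough page timings.
    # The URLs are placeholders; run the same list against DEV and QA.
    from selenium import webdriver

    CLICK_STREAM = [
        'http://dev.example.com/reports/home',
        'http://dev.example.com/reports/sales',
        'http://dev.example.com/reports/home',  # revisit to exercise the cache
    ]

    def load_time_ms(driver):
        # loadEventEnd - navigationStart approximates "fully loaded" time.
        return driver.execute_script(
            'var t = window.performance.timing;'
            'return t.loadEventEnd - t.navigationStart;')

    driver = webdriver.Firefox()
    try:
        for url in CLICK_STREAM:
            driver.get(url)
            print('%-40s %6d ms' % (url, load_time_ms(driver)))
    finally:
        driver.quit()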

The results of one of my experiments showed a marked improvement in page times. The tool breaks out a few measurements, First Impression Time, Onload Time, and Fully Loaded Time, and in each of those I saw an improvement:



|                            | Expires Disabled         | Expires Enabled        | Delta                          |
|----------------------------|--------------------------|------------------------|--------------------------------|
| Rank                       | F (0)                    | A (100)                | upgraded from F to A           |
| First Impression Time (ms) | 2826                     | 839                    | -1.987 seconds, or 337% faster |
| Onload Time (ms)           | 3535                     | 1715                   | -1.820 seconds, or 206% faster |
| Fully Loaded Time (ms)     | 3746                     | 1930                   | -1.816 seconds, or 194% faster |
| Remarks                    | 69 requests, 63 uncached | 70 requests, 63 cached |                                |

Comparing the results from the two configurations, we can see that the user experience is greatly improved by a primed cache. In practice a user would see slow response times with an empty or expired cache, then fast times after their cache was loaded with the static content. Expiration time frames as long as 10 years may not work well with every app, which is why we decided to use 90 days in our experiment.
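For clarity on the Delta column: the percentages work out to be the uncached time expressed as a percentage of the cached time. A few lines of Python, using the numbers straight from the table, reproduce the figures:

    # Reproduce the Delta column from the raw timings in the table.
    timings = {
        'First Impression Time': (2826, 839),
        'Onload Time':           (3535, 1715),
        'Fully Loaded Time':     (3746, 1930),
    }
    for name, (disabled_ms, enabled_ms) in timings.items():
        saved = disabled_ms - enabled_ms               # milliseconds shaved off
        pct = round(100.0 * disabled_ms / enabled_ms)  # e.g. 2826 / 839 -> ~337%
        print('%-22s -%d ms, %d%% faster' % (name, saved, pct))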

dynaTrace results without Expires headers (screenshot)

dynaTrace results with Expires headers (screenshot)


Conclusion:
Setting content to use Expires headers can measurably increase the performance of web pages. The increase is limited to return visitors to a page or site and does not help new visitors as much. In this particular case of a web site that is internal to a company and has nothing but return visitors, utilizing cache control headers showed a marked improvement. There are many other steps that can be taken to improve web site performance, but this one represents an easy setting change because it does not require any deep dives into the application code and logic. To know for sure whether this setting helps or hinders performance, take measurements that reveal the actual response times and compare them with and without the setting enabled.
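One last practical note: before taking any measurements, it is worth confirming the headers are actually being sent. A tiny stdlib-only Python check, with a placeholder URL, might look like this:

    # Confirm a static asset is served with the expected caching headers.
    # The URL is a placeholder, not our real application.
    import urllib.request

    with urllib.request.urlopen('http://dev.example.com/static/app.css') as resp:
        for name in ('Expires', 'Cache-Control', 'Last-Modified'):
            print('%s: %s' % (name, resp.headers.get(name, '(not set)')))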