I took some pictures from my window earlier this week at work and decided to post them here to test out the new gallery feature built into WordPress 2.5. Let’s see how this works.
I just upgraded my blog to WordPress 2.5 RC2 and everything stopped working. Instead of getting my blog or the admin screen, I got the following error message:
Fatal error: Call to a member function add_query_var() in taxonomy.php
I discovered the root cause of the problem and it is the wonderful Simple Tags plugin. Disabling the plugin restores WordPress back to its own awesome self. Simple Tags is a great plugin that allows you to manage tags within WordPress.
I know the latest release of WordPress, v2.5 is not officially out but I have been running WordPress 2.5 from their Subversion repository trunk for about the last week. My initial thoughts on the 2.5 release are very positive and all of my plugins and themes have worked without any major changes. Most plugin and theme authors are already busy upgrading their stuff to the latest code.
Some of the new features include a customizable dashboard, multi-file upload, built-in galleries, one-click plugin upgrades, tag management, built-in Gravatars, full text feeds, and major performance improvements. Apparently, the Automattic crew has been working with the folks at Happy Cog — Jeffrey Zeldman, Jason Santa Maria, and Liz Danzico — to redesign WordPress from the ground up. The result is a new way of interacting with WordPress that will remain familiar to seasoned users while improving the experience for everyone. It's more than just new CSS – it's a very nice redesign of the user interface that may require a little time to get familiar with, but it's worth the effort: the new interface is very user-friendly, slick, and powerful.
Do this at your own risk as this is still pre-release software, but if you want to run the latest development trunk of WordPress, use the checkout command (svn co) to get the latest code, then run svn update periodically to stay current.
svn co http://svn.automattic.com/wordpress/trunk/ .
svn update
A couple of months ago, I noticed that I was getting pretty close to using up all of my monthly bandwidth allocation for my server, and that was a surprise. I run several blogs that get quite a few hits, but I didn't think I was anywhere near going over my 250 GB allotment. So I decided to spend a little time optimizing my server and figuring out how to get the most performance out of my little box. Jeff Atwood's wonderful blog entry about Reducing Your Website's Bandwidth Usage inspired me to write about my experience and what I ended up doing to squeeze the most out of my server.
I had already done some of the obvious things that people typically do to minimize traffic to their site. First and foremost was outsourcing my RSS feeds to FeedBurner. I've been using FeedBurner for several years now, after I learned the hard way how badly programmed a lot of the RSS readers out there were. I had to ban several IP addresses that were fetching my full feed every 2 seconds – hopefully that was just a bad configuration on their side, but who knows. Maybe it was an RSS DoS attack :).

After taking a little time to see what was using up a lot of the bandwidth, I discovered several things that needed immediate attention. First and foremost was missing HTTP compression. It looks like an Apache or PHP upgrade I did in the past few months had disabled the Apache module for GZIP compression, so all the traffic was going out as uncompressed text. HTTP compression delivers amazing speed enhancements via file size reduction, and most if not all browsers support compression, so I enabled compression for all content of type text/html and all CSS and JS files.
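For reference, turning GZIP compression back on for those content types in Apache 2.x looks roughly like this – a sketch assuming mod_deflate is available; the module path and the JavaScript MIME type vary by distribution and configuration:

```apache
# Load the deflate module (path varies by distribution)
LoadModule deflate_module modules/mod_deflate.so

# Compress HTML, CSS, and JavaScript responses
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript

# Work around known-broken older clients
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
```

After a graceful restart of Apache, you can confirm it took effect by checking for a Content-Encoding: gzip response header.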
Some older browsers don't handle compressed JS and CSS files, but IE6 and above seemed to handle JS/CSS compression just fine, and my usage tracking (pictured above) indicated that most of my IE users were on IE6 or newer.
Enabling HTTP compression shrank my blog index page by 78%, resulting in a measured performance improvement of almost 4.4x. While your mileage may vary, the resulting performance improvement got me into the Top 20 column at GrabPERF almost every single day.
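If you want to estimate the savings before touching your server config, you can gzip a saved copy of a page locally and compare sizes. A rough sketch – the generated index.html here is just a stand-in for your real page:

```shell
# Create a sample file to stand in for the blog index page
# (substitute a saved copy of your real page to measure actual savings)
for i in $(seq 1 200); do echo '<p>Hello, compression!</p>'; done > index.html

# Compare raw vs. gzipped size
ORIG=$(wc -c < index.html)
gzip -9 -c index.html > index.html.gz
COMP=$(wc -c < index.html.gz)
echo "original: $ORIG bytes, compressed: $COMP bytes"

# Percent saved, using integer arithmetic
echo "saved: $(( (ORIG - COMP) * 100 / ORIG ))%"
```

Highly repetitive test data like this compresses far better than real HTML, so treat the number as an upper bound, not a prediction.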
Another issue I had was the number of images being loaded from my web server. As most of you already know, browsers will typically limit themselves to 2 connections per hostname, so if a page being loaded references 4 CSS files, 2 JS files and 10 images, you are loading a lot of content over those 2 connections. So I used a simple CNAME trick to create image.j2eegeek.com to complement www.j2eegeek.com and started serving images from image.j2eegeek.com. That did help, and I considered doing something similar for CSS and JS files, but decided instead to outsource image handling to Amazon's S3.
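The CNAME trick itself is a one-line DNS change. A BIND-style zone-file sketch (your DNS provider's interface will look different, and the hostnames are mine – substitute your own):

```text
; Serve images from a second hostname so browsers open
; an additional pool of parallel connections to the same box
image.j2eegeek.com.   IN   CNAME   www.j2eegeek.com.
```

The content doesn't move anywhere; the browser just sees a different hostname and stops counting those image downloads against the 2-connection limit for www.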
Amazon's S3 or Simple Storage Service is a highly scalable, reliable, fast, inexpensive data storage infrastructure that is fast and relatively inexpensive. S3 allows you to create a 'bucket', which is essentially a folder that must have a globally unique name and cannot have any sub-buckets or directories and so it's basically emulates a flat directory structure. Everything you put in your bucket and make publically available is accessible via http using the URL http://s3.amazonaws.com/bucketname/itemname.png. Amazon's S3 Web Service also allows you to call it using the HTTP Host header and so the URL above would become http://bucketname.s3.amazonaws.com/itemname.png. You can take this further if you have access to your DNS server. In my case, I created a bucket in S3 called s3.j2eegeek.com. I then created a CNAME in my DNS for s3.j2eegeek.com and pointed it to s3.amazonaws.com. And presto – s3.j2eegeek.com resolves to essentially http://s3.amazonaws.com/s3.j2eegeek.com/. I then used John Spurlock's NS3 Manager to get my content onto S3. NS3 Manager is a simple tool (windows only) to transfer files to/from an Amazon S3 storage account, as well as manage existing data. It is an attempt to provide a useful interface for some of the most basic S3 operations: uploading/downloading, managing ACLs, system metadata (e.g. content-type) and user metadata (custom name-value pairs). In my opinion, NS3 Manager is the best tool out there for getting data in and out of S3 and I have used close to 20 web based, browser plug-in and desktop applications.
In addition, I decided to try out a couple of PHP accelerators to see if I could squeeze a little more performance out of my web server. Compile caches are a no-brainer, and I saw a decent performance improvement in my PHP applications. I blogged about this topic in a little more detail, and you can read that if you care about PHP performance.
The last thing I did probably had the biggest impact after enabling HTTP compression: moving my Tomcat application server off my current Linux box and onto Amazon's EC2. Amazon's EC2, or Elastic Compute Cloud, is a virtualized computing cloud available for $0.10 per instance-hour. I've been playing around with EC2 for a while now and just started using it for something real. I have tons of notes that I've taken during my experimentation with EC2, where I took the stock Fedora Core 4 image from Amazon and turned that server into my Java application server running Tomcat and Glassfish. I also created my own Fedora Core 6 and CentOS 4.4 images and deployed them as my server. My current AMI running my Java applications is a Fedora Core 6 image, and I am hoping to get RHEL 5.0 deployed in the next few weeks, but all of that will be a topic for another blog.
In conclusion, HTTP compression offered me the biggest reduction in bandwidth utilization. And it is so easy to set up on Apache, IIS or virtually any Java application server that it is almost criminal not to do so. 🙂 Maybe that's overstating it a bit – but there are some really simple ways to optimize your website, and you too can make your site hum and perform like you've got a cluster of servers behind it.
As I deployed more applications and web sites on my server, I started running into resource issues. Since most of the applications I write are in Java, I run Tomcat on my Linux server. But I also run Apache as a front-end host for Tomcat as well as several PHP applications like WordPress, Vanilla and a few other PHP applications that I’ve written. I am not an expert PHP developer by any stretch of the imagination but I tinker with enough PHP that I decided to take a look at PHP Acceleration software.
For the uninitiated, PHP is a scripting language that is interpreted and compiled on the server side. PHP accelerators cache PHP scripts in their compiled state along with optimizing them. There are several PHP optimization products out there, and I decided to give eAccelerator, XCache and APC a try on my Linux machine. For the record, the box is running CentOS 4.4, which is essentially a repackaged Red Hat Enterprise Linux 4.x.
- eAccelerator – eAccelerator is a free open-source PHP accelerator, optimizer, and dynamic content cache. It increases the performance of PHP scripts by caching them in their compiled state, so that the overhead of compiling is almost completely eliminated. It also optimizes scripts to speed up their execution. eAccelerator typically reduces server load and increases the speed of your PHP code by 1-10 times.
- XCache – XCache is a fast, stable PHP opcode cacher that has been tested and is now running on production servers under high load.
- APC – The Alternative PHP Cache (APC) is a free and open opcode cache for PHP. It was conceived of to provide a free, open, and robust framework for caching and optimizing PHP intermediate code.
I compiled and installed these PHP accelerators and found that APC worked the best for me. XCache seemed to work well and provided a nice admin application that lets you peek inside the cache to see what's cached, the hit/miss ratio, etc. eAccelerator also seemed to work well and offered a great performance boost, but it caused segmentation faults that made the Apache web server unusable. It could have been bad PHP code causing the segmentation faults, but I didn't really spend any time getting to the root cause. APC just worked, pretty much like XCache, but seemed to offer a little better performance. Now, I didn't really perform any empirical testing here – I simply relied on my website monitor GrabPERF as I ran each PHP extension for a few days. Your mileage may vary based on your server architecture, application, lunar phase, etc., but APC seemed to work the best for me.
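For anyone who wants to try APC, the setup is just the compiled extension plus a couple of php.ini lines. A minimal sketch – the shared-memory size is something you tune based on how much compiled code your applications generate:

```ini
; Load the APC extension (built and installed via pecl install apc)
extension=apc.so

; Enable the opcode cache and give it 64 MB of shared memory
apc.enabled=1
apc.shm_size=64
```

Restart Apache afterwards, and the apc.php status page that ships with APC will show you hit/miss ratios and memory usage, much like the XCache admin screen.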
Gavin Winslow is beating the odds and is about to celebrate his first birthday. Gavin is my wife's cousin's son, and he was born on February 23, 2006 with end-stage renal failure, or congenital kidney failure. Gavy was born with only 10-15% function in one of his kidneys and 0% function in the other. He was admitted to Children's Hospital of Wisconsin immediately after his birth and started on peritoneal dialysis.
Gavin is a fighter and a tough little guy who is beating the odds every day as he gets ready for his kidney transplant. Gavin's family and friends have come together and helped raise $95,778.67 (as of 2/15/07) to help offset the cost of Gavin's transplant and transplant-related expenses. Visit www.savebabygavin.com for more details on Gavin's journey.
Gavin's mom Jill and my wife Kristin put together this video to celebrate his first birthday. Check it out and visit www.savebabygavin.com and donate to help Gavin celebrate many more birthdays.
A lot of you read this blog using an RSS reader, so you probably don't see the theme that adorns this blog, but I just switched this blog to the NigaRila theme by Sadish Bala. I have been looking for a great 3-column theme, and Sadish has created one of the best-looking and most usable themes out there.
NigaRila is an awesome theme for WordPress 2.0 that has 3 columns on the front page, with a fixed width of 900 pixels, and 2 columns on all other pages. The theme has two sidebars on the right side; if you have the sidebar widgets plugin installed, you can use it for both of them. NigaRila produces valid XHTML and offers a great deal of functionality. I've made a couple of modifications to add support for a few other plugins, but most of the functionality you see on my blog is out of the box, including the archive and contact pages. Sadish wants $15.00 for this theme, and I think it's well worth the cost.
In addition to NigaRila, Sadish recently created a new WordPress theme called Intense after learning about my wife's cousin's son, Gavin Winslow. Sadish was moved by Gavin's story and decided to help by adding a link from his theme to Gavin's site at www.savebabygavin.com. This has resulted in Gavin's site getting thousands of visits from people who normally wouldn't know about Gavin. Thank you, Sadish, for helping raise awareness of Gavin's story, bringing additional visibility to his site, and creating a great WordPress theme in the process.
The day started off fairly normally – check GMail for anything that needs immediate attention, then move to blog stats, and then hit GrabPERF to see how my sites are behaving. And much to my very pleasant surprise, my blog made it onto the front page of GrabPERF, and on the good (Top 20 performance) side, not the bad side. 🙂 Check out the screenshot below and squint really hard to see Vinny's blog in there at #17, with an average page load time of 0.4739 seconds.
If you haven't heard of GrabPERF, it is an awesome free (community-supported) service created by Stephen Pierzchala that provides distributed measurement and monitoring services for tracking key performance benchmarks of many sites, including my blog. The GrabPERF agents gather detailed component, page size, and response code data for the sites they monitor on a regularly scheduled interval and ship it to the central database in real time, where it is available for presentation in the GrabPERF interface. I've been a big fan of GrabPERF for a while now and use it as THE key measure of my sites' performance.
It was really exciting to see the latest performance results, as I've had a rough couple of months in terms of hosting. For a while, I had this blog hosted at Kattare to see how their Java/JSP (Tomcat) hosting services stacked up. While my blog was hosted there, I ran into a bug in the awesome Ultimate Tag Warrior 3 WordPress plugin, where the plugin filled up my wp_postmeta database table with an empty meta_value for every post that was viewed and didn't have one or more tags defined. I had just started using the tag plugin, so not all my posts had tags defined, and with the traffic I get, I was causing major issues for Kattare's shared MySQL database server, so they disabled my site. Since I already had an account with TextDrive and a reliable daily MySQL backup, I moved my site to TextDrive. With the Ultimate Tag Warrior plugin fixed, my blog worked for a while, but it started causing problems for the folks at TextDrive: I was on their shared hosting plan, and the traffic I was getting was adversely affecting other people on my server. So I decided to move to A Small Orange and check out one of their virtual (VPS) offerings to see how they compare to a traditional dedicated server. I picked the professional plan, which got me 512 MB of RAM, 20 GB of disk space, and 250 GB of bandwidth running CentOS (Red Hat Enterprise Linux 4) on a quad 2GHz CPU box for $90 a month. I have been incredibly happy with the performance of the server, the support team, and the overall performance of their network – and the results from GrabPERF show it.
I am still continuing to 'play' with Amazon's EC2 (Elastic Compute Cloud) offering to see if it could really become the killer solution that will change the hosting landscape. A dedicated (albeit virtual) machine for $70.00 a month is a really compelling story, and if Amazon can back that up with additional offerings where you can geographically distribute your applications across multiple datacenters and still scale computing capacity up and down as needed, why would you host anywhere else? I know Amazon's EC2 offering is 'bare-bones' on purpose: you have to build your server from scratch, and you don't get a web interface like cPanel or Plesk to manage your server instance or help with your server configuration once you are up and running. This has to open up opportunities for VARs to offer value on top of the EC2 platform by creating a 'hosting-in-a-box' service where they build custom Linux deployments, manage them, and offer simple management tools. S3, the Amazon storage service, has created a huge marketplace for storage and backup tools, online backup vendors, and other niche products. I think EC2 is going to do the same for the hosting market.
Almost forgot: I am running WordPress 2.0.5 with a ton of plugins that are listed on my colophon page, and the real difference-maker is WP-Cache with this fix to wp-cache-phase2.php that makes it compatible with WordPress 2.0.x.
I finally upgraded my blog to WordPress 2.0 a few weekends ago and am now finally getting around to blogging about it. I had blogged previously about issues I had upgrading my blog software, but those issues were related to MySQL upgrade and version compatibilities. To get around the database issues, I used MySQL Administrator to back up the entire database from the old server and restored it on the new server. Not sure why that worked, but it did, and it didn't create any issues. In the past, I had backed up the database using MySQL Administrator and then used the MySQL Query Browser to create the new database and insert the data. I've spent a lot of time figuring out the differences between the MySQL versions and the issues I had, and will post a lengthy (and boring) blog entry about that in the near future.
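For anyone doing the same move from the command line instead of the MySQL Administrator GUI, mysqldump does the equivalent backup-and-restore. A sketch only – the hostnames, user names, and the wordpress database name are placeholders for your own setup:

```shell
# Dump the entire blog database from the old server
# (-p prompts for the password; --databases includes the CREATE DATABASE statement)
mysqldump -h old-host -u backup_user -p --databases wordpress > wordpress.sql

# Load the dump into the new server
mysql -h new-host -u admin_user -p < wordpress.sql
```

Because the dump is plain SQL generated by the server itself, this sidesteps many of the client-tool version mismatches that a GUI export/import can run into.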
After upgrading the database, I upgraded my blog software to WordPress 2.0, and everything worked with the exception of a few minor issues. One of the biggest issues I ran into was an internal rewrite problem: my blog is deployed under /blog, and I had a page whose slug was blogs-i-read. WordPress was generating a 404 for that page, and I fixed the issue by simply renaming the slug to remove the word blog. This issue is fixed in the latest maintenance release of WordPress, which currently happens to be 2.0.1. In addition, I had a problem with the awesome WordPress FeedBurner plugin that was also related to rewrite rules, but Steve Smith had already fixed that problem in the latest revision of the plugin.
Since WordPress 2.0 had been working pretty smoothly and 2.0.1 was working in my development area, I applied that today to this site and it worked like a charm. The biggest improvement (besides the bug fixes) appears to be performance. It’s too early to tell if the numbers will hold up but here is a performance chart that shows dramatic improvement in performance since the upgrade.
Chart courtesy of GrabPERF
I hope this performance improvement lasts, as WordPress 2.0 was a lot slower than WordPress 1.5 with the WP-Cache plugin. Incidentally, the WP-Cache plugin has not been upgraded to work with WordPress 2.0, but if this performance holds, who needs WP-Cache 🙂