Looks like TechCrunch is down because the MySQL database server is either down or inaccessible.
Mike really should look into VIP Hosting at WordPress.com.
I apologize as this is a little off-topic for this blog, but I wanted to bring as much attention as I could to the Save Baby Gavin website and blog. My wife's cousin Jill has a beautiful little boy named Gavin. Gavin was born on February 23, 2006 with end-stage renal failure, or congenital kidney failure. Gavin was born with only 10-15% function in one of his kidneys, and 0% function in the other. He was admitted to Children's Hospital of Wisconsin immediately after his birth and started on peritoneal dialysis. Peritoneal dialysis means that he is hooked up to a machine at home every night for 10 hours to “clean out” his blood. Without a kidney transplant, Gavin will not survive.
We are launching an initial fundraising campaign to help Gavin's family with costs not covered by private insurance. Our goal is to raise $100,000 for Gavin's account, which is managed by the Children's Organ Transplant Association (COTA). COTA is a national, nonprofit 501(c)(3) charity dedicated to helping families raise funds for transplant-related expenses.
Please visit the Save Baby Gavin site and donate anything you can. If you can, please blog about this and link to Gavin’s site to get a wider distribution of this. Thank you.
I first discovered OpenDNS on Chris Pirillo's blog – OpenDNS is a free service that is designed to make your Internet browsing faster, safer and smarter. And guess what – it does exactly that. OpenDNS is essentially a set of massive distributed DNS caches that allow faster name resolution while still obeying the TTL rules for each domain. They have a very fast, geographically distributed network of DNS caches that allows for blazingly fast lookup times, which in turn means faster connections to the sites you visit. A traditional uncached DNS lookup starts at one of the root name servers, which sends you to the name server for the top-level domain, which will then probably get you to the name server hosting the DNS entry for the site you are trying to reach. OpenDNS skips all of that and returns the IP address of the site you are attempting to connect to in a single request.
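Just to make that lookup path concrete, here is a minimal Python sketch that times an A-record lookup against OpenDNS versus whatever resolver your system already uses. It relies on the dnspython library, which is purely my choice for illustration, and the OpenDNS resolver addresses below are the publicly advertised ones – verify them on the OpenDNS site before relying on them:

import time
import dns.resolver  # dnspython (pip install dnspython); used here purely for illustration

def timed_lookup(hostname, nameserver=None):
    """Resolve an A record and return (addresses, elapsed milliseconds)."""
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]  # query only this specific server
    start = time.perf_counter()
    answer = resolver.resolve(hostname, "A")
    elapsed_ms = (time.perf_counter() - start) * 1000
    return [rr.address for rr in answer], elapsed_ms

host = "www.example.com"
print("system resolver:       ", timed_lookup(host))
print("OpenDNS 208.67.222.222:", timed_lookup(host, "208.67.222.222"))
print("OpenDNS 208.67.220.220:", timed_lookup(host, "208.67.220.220"))

Run it a couple of times – the first query may miss the cache, so the repeat lookups are the interesting numbers.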
The safer surfing part comes into play with the phishing filter built into OpenDNS. OpenDNS intercepts connections to known phishing sites, based on network analysis and feeds from other network operators, including their new venture PhishTank. PhishTank is a community anti-phishing Web site where anyone can go to submit suspected phishes, track the status of their submissions and help verify others' submissions.
The smarter bit comes in with the typo-correction feature of OpenDNS. So if you are trying to get to google.com and misspell it, OpenDNS attempts to correct the typo and send you to the right site instead of to one of the squatter sites that are just waiting for that misspelling to land you on their page.
I have been using OpenDNS for months now, ever since I first read Chris's blog entry about it, and have been extremely happy with the free service. Can't beat the price. I can't really tell if my surfing is any faster, but cognitively I know it is, and that makes me happy. 🙂
Another thing that really stands out about OpenDNS is the service. I've had two occasions where I contacted support to check on some DNS changes I made while moving my domains from one hosting vendor to another, and I got an almost immediate response both times. John Roberts, the VP of Product Development, responded to my query within minutes on both occasions and helped me by force-clearing the cached entries for my domain.
Anyone and everyone can start using OpenDNS to surf smarter, faster and safer. Check out their Getting Started page for more information on how to change your router or computer DNS settings to start using OpenDNS.
Just found Pictobrowser via Thomas Hawk's blog and it is an amazing way to embed pictures in your blog. Pictobrowser is a simple widget that lets you display sets of pictures from Flickr directly on your site or blog using Flash, so visitors never leave your site. Pictobrowser is the brainchild of Diego Bauducco. Check out a sample below from one of my sets:
You know the old routine – you get a new machine and then spend weeks looking for and installing all the applications, tools, utilities, etc. that you had on your old computer and that made you so productive. There is always that one utility that you use once in a while but just can't seem to find.
I recently bought a new computer and decided to make a list of all the software I installed on the new computer so that I’m ready to do this again for my next machine. I wish I had discovered Belarc Advisor before I rebuilt my old desktop as a Linux (Ubuntu) desktop. So here is a fairly complete list of what’s installed on my machine and if you see something that I should have, please leave me a comment:
The Essentials
Development
Audio, Video & Graphics
Browsers & Extensions
Utilities
Geek Smithology – The State of Enterprise Ruby
The real takeaway is that if you truly believe in using the best tool for the job, then you will be using Ruby at some point in the future.
StorageMojo – Mission Impossible: Managing Amazon’s Datacenter, Pt I
Unlike Google's clean sheet approach to creating internet-class infrastructure, Amazon has made every mistake in the book. The original site was one hairball, database, OLTP and web server all on one system
Google Research Publication: BigTable
Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers.
Amazon launched their latest offering entitled Unbox Video which is essentially a video (TV shows, movies, etc) download to buy or rent service. Rumor is that Amazon rushed this out on Friday, September 8th to beat some super secret announcement coming from Apple later next week.
The Unbox video service doesn't offer anything new and is in fact more of the same. I can buy a movie, but I can't burn it onto a DVD to watch it on my TV. Media Center PCs are the exception – if you have a Media Center PC hooked up to your TV, or are using something like Media Center Extender to broadcast the output to a TV, you can watch your purchases there. The videos that you download from Amazon are DRM'd Windows Media (WMV) files, so you cannot put them on your video iPod. Apple essentially works the same way with its DRM, but since Apple controls the mobile music and video player market, it's less of an issue. I'm guessing you've probably already gotten the sense that Unbox video is Windows-only, and you would be right. No Mac or Linux support at this time.
There are two new concepts that set Amazon Unbox video apart from iTunes and other similar services. To my knowledge, Amazon is the only one that will let you rent a movie by downloading it to your computer. You have 30 days to start watching it, and 24 hours to finish once you start, before the video is automatically deleted. I know Netflix is working on a download-and-rent service, but I don't believe it's available at this moment. Please correct me if I'm wrong.
Another concept that I consider a move in the right direction is the Media Library. Everything you buy or rent sits in your Media Library on Amazon, so you can buy an item on one machine and download it to watch on another registered machine. Both machines must have the Unbox video player and be registered on Amazon as your machines. As an experiment, I bought a TV show on my laptop and downloaded it. I then copied the video over to my desktop and dropped it in the directory where Amazon would expect its videos to reside. The Unbox player didn't see it, and I wasn't able to play it directly without downloading it from my Media Library to the desktop. The video player was smart enough to realize that the file was already there, and it started playing within seconds of marking the video as downloaded on the desktop. The subtle point here is that if your computer crashes and you lose your purchased content, you will be able to download it again from your Amazon Media Library. It would be interesting for Amazon to make this a paid service and use S3 to automatically back up your purchased content for you.
The video quality of the TV shows that I purchased was good, and the sound was fine as well. I guess a true test would be to buy a widescreen movie and see if the Dolby 5.1 surround sound works as advertised. All in all, the video service is nice but nothing earth-shattering, and it left me wanting more. Another major issue with this offering is the licensing agreement that you accept as part of the software installation: it requires you to apply all patches from Amazon whether you want them or not, and Amazon can delete your movies if you uninstall their video player. Yikes! Doesn't sound a lot like the Amazon we know and love, does it? More information at the uninnovate blog and CNet.
Why is it so hard to come up with a video service where I can buy a movie and burn it onto a DVD to watch it on my TV? I hate DRM, and while I understand the need to protect copyrights, there has to be a way to protect content and still allow me, as the purchaser, fair use of that purchased piece of content. I guess the key word here is purchase – I am paying for something. Don't put limitations on my personal usage of it. Anyone that can produce a service that allows that will eat everyone's lunch. I hope Apple or Netflix or YouTube or a dozen of the other YouTube clones/wannabes out there come up with a way to legally distribute video content while allowing the purchaser some flexibility on where they can view that piece of content. It would also be great if they could include some future-proofing on your purchase, so that if you bought the 2nd season of The Office with some proprietary DRM, you could exchange or upgrade it for any future format without having to repurchase it all over again. Ah, to dream…
Now that the blogosphere has settled down after the launch of Amazon’s EC2 beta program, I figured it was time to talk about something that I found missing in all the blogs and online discussions.
Before we get to the enterprise implications of EC2 and S3, I should probably give the people that haven't heard about them a little background. Amazon S3 is a service that launched a few months ago that provides a simple web services interface to store and retrieve any amount of data, at any time, from anywhere on the web. It gives you access to a highly scalable, reliable, fast data storage infrastructure without spending the millions it would take to create a redundant, fault-tolerant SAN environment. Amazon Elastic Compute Cloud (Amazon EC2) is a new service that launched last week that finally realizes the promise of grid computing for me.
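To give you a flavor of how simple that web services interface is, here is a rough Python sketch of fetching an object over S3's original REST API. The bucket name, key and credentials are placeholders, and it uses the early HMAC-SHA1 "AWS accesskey:signature" signing scheme, which newer AWS regions and SDKs have since replaced, so treat it purely as an illustration:

import base64, hashlib, hmac, urllib.request
from email.utils import formatdate

ACCESS_KEY = "YOUR_ACCESS_KEY"                   # placeholder credentials
SECRET_KEY = "YOUR_SECRET_KEY"
BUCKET, KEY = "my-bucket", "backups/photos.tar"  # placeholder bucket and object

date = formatdate(usegmt=True)                   # RFC 1123 date header
string_to_sign = f"GET\n\n\n{date}\n/{BUCKET}/{KEY}"
signature = base64.b64encode(
    hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
).decode()

req = urllib.request.Request(
    f"https://s3.amazonaws.com/{BUCKET}/{KEY}",
    headers={"Date": date, "Authorization": f"AWS {ACCESS_KEY}:{signature}"},
)
with urllib.request.urlopen(req) as response:
    open("photos.tar", "wb").write(response.read())

A PUT to store data is the same idea – you sign the verb, content type and resource path and send the bytes.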
Amazon EC2 gives you access to a virtual computing environment in the cloud. Your applications run on a “virtual CPU”, the equivalent of a 1.7 GHz Xeon processor with 1.75 GB of RAM, 160 GB of local disk and 250 Mb/second of network bandwidth. You pay 10 cents per hour per instance, which works out to about $72 per instance per month. You can provision one, hundreds or even thousands of servers, growing capacity as your application grows. Can you imagine being able to programmatically provision 1, 2 or 500 additional servers for your application in minutes?
To set up your instance, Amazon gives you tools to create your own Amazon Machine Image (AMI). An AMI is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance – a web server, database server, and so on. (Currently, Fedora Core 3 and 4 systems based on the Linux 2.6 kernel are explicitly supported, although any Linux distribution which runs on this kernel version should work.) Once you create your AMI, you upload it to Amazon S3 and your instance is ready to go. You can target that image at multiple instances, or build out your web tier on one set of machines, your middle tier on another set, and your database on yet another. Since this is essentially virtualized Linux, any application that works on Linux should work here, including Java applications. Amazon EC2 is a closed beta program and I haven't gotten access to the beta yet, but Edwin Ong over at castblog has a nice review with some great screenshots that demonstrate the potential here.
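Since I haven't seen the beta myself, here is a purely hypothetical Python sketch of what provisioning servers programmatically could look like, using the boto library just as an illustration – the AMI ID, key pair name and instance counts are made up:

import boto.ec2  # boto library, used here purely for illustration

conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",      # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Launch five instances from an AMI you previously bundled, uploaded to S3 and registered.
reservation = conn.run_instances(
    "ami-12345678",         # placeholder AMI ID
    min_count=5,
    max_count=5,
    key_name="my-keypair",  # placeholder SSH key pair
)

for instance in reservation.instances:
    print(instance.id, instance.state)

# When traffic dies down, scale back in just as easily:
# conn.terminate_instances([i.id for i in reservation.instances])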
Now that you have the background on S3 and EC2, you can just imagine the potential for startups. Instead of having to pay for terabytes of storage if you are the next Flickr or YouTube killer, you can simply use S3 for all your storage needs and have a redundant, encrypted file system that's fairly bulletproof and grows with you. Instead of having to forecast your storage needs, you can focus on other real, tangible problems. EC2 now provides the same thing on the computing side of the house. Not sure how many dedicated managed servers to get at your ISP? Well, just use EC2 and grow your farm of dedicated virtual boxes as you need them. And so if Digg, Techmeme, Reddit, TechCrunch or the meme of your choice is sending you millions of hits, add a few virtual servers to support your application and then scale back as the traffic dissipates.
The advantages of this virtual platform are pretty obvious, but I see major potential in this model for the enterprise. Take any of the Fortune 1000 companies, or the millions of smaller companies below them. Most of them are required, either by regulation or by the competitive landscape, to have BCP (business continuity) plans, especially if they are in a highly regulated industry like banking/finance, insurance, or health care. So what if you could build out a virtual BCP environment, where you test, build and deploy your applications on a few EC2 instances to validate them, and then scale up by adding additional instances if you actually need to fail over? The traditional model of BCP is building out another datacenter, or leasing space (colocation) in an established datacenter that meets your power, telecom/network, security and service needs, which costs anywhere from several thousand dollars to millions depending on the scale. What if you could completely eliminate that cost by using Amazon's virtual computing grid? What if you could deploy all of the applications that are critical to running the business in case of a disaster on a virtual cluster of servers, without paying the cost of a full physical build-out? The questions of privacy, data encryption and access controls would need to be fleshed out, but Amazon could potentially be the solution for companies that are struggling to justify exorbitant BCP costs.
I think S3 has already changed the competitive landscape and realized the dream of the virtual storage network, and EC2 is going to be the type of disruptive change that turns the market on its head. I cannot wait to see the tools that are going to pop up around the EC2 space to make the creation and deployment of your virtual server easier than the current command-line process. S3 is a great example, with some great applications popping up to take advantage of it; my current favorite is JungleDisk. S3 over WebDAV – brilliant. As an aside, I am working on my own version of an AJAX-enabled S3 web application, but it's more for personal use and will probably end up as tutorial-ware more than anything. It's interesting to see the mini-industry that has popped up around S3, and EC2 will draw even greater interest. It will only be a matter of time before vendors offer one-click setup of your Amazon EC2 server, preloaded with the Linux flavor of your choice along with the applications you need. I can also see a new group of hosting providers jumping in as VARs to resell the virtual instances they pay for as shared server slices. The potential is limitless – now I just need to get into the beta so I can see it for real.
Is there a known issue with Subversion and proxy servers? I guess I should have run into this years ago, but I don't ever remember having an issue. I just got a new laptop with a fresh install of Windows XP, and connections out to the Internet go through a Squid proxy server. When I connect to my Subversion server using TortoiseSVN (1.4.0-RC1), I get the following error message:
svn: REPORT request failed on '/svn/!svn/vcc/default'
svn: REPORT of '/svn/!svn/vcc/default': 400 Bad Request (http://www.vinnycarpenter.com)
I know the issue isn't with TortoiseSVN, as I get the same errors connecting with the command-line tools, IntelliJ IDEA and Eclipse with the Subclipse plugin. Is this a known issue? The Subversion FAQ doesn't really have any answers, and its recommendation of adding extension_methods didn't work. Anyone else run into this?
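For reference, the FAQ's suggestion boils down to telling Squid about the extra WebDAV methods that Subversion sends through the proxy, roughly like this in squid.conf (this is for Squid 2.x, and it is the snippet that did not help in my case):

# squid.conf – allow the WebDAV methods Subversion's HTTP client uses
extension_methods REPORT MERGE MKACTIVITY CHECKOUT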
BEA is hosting a webinar on September 20 that will discuss the existing integration points between WebLogic Server and Spring, and what is coming down the pipe for Spring 2.0, WebLogic Server 9.2 and beyond. The webinar will also cover new technologies introduced with WebLogic 9.2 that support the use of the Spring Framework, and how they work with Spring to make your development easier. The webinar will be hosted by Andy Piper, who worked with Rod Johnson and crew to implement the initial Spring support in WebLogic and the MedRec example that illustrates best practices for developing Spring applications under WebLogic. The new Spring-enabled version of MedRec was re-architected from an EJB-based architecture to a Spring-based one for the handling of transactions, data access, and remoting.