Entrepreneurial Leadership and Management . . . and Other Stuff


Gadget Review–Thinkpad X220

This is my third Thinkpad – the first from IBM and now from Lenovo. They have been my laptop of choice for as long as I can remember: an X40, then an X60s and now this new baby. Not as stylish as the unibody Macs that almost everyone I know uses these days, but I’ll take function over form any day (well, mostly – although my kids strongly disagree, I’m not entirely without style). These computers have been rock solid over the years and I’ve been able to continuously extend their lives, upgrading batteries, disks, memory and versions of Windows – eking out more from these machines than IBM and Lenovo probably ever intended. They have been no-muss, no-fuss workhorses and I fully expect the same from the X220.

The configuration I purchased isn’t even all decked out. I selected the options that best met my needs – a 2.5GHz Sandy Bridge i5, 6GB of memory, a 128GB SSD, a 1366x768 IPS 16:9 12.5” display and Windows 7 64-bit (oh yeah, baby). While that’s still a formidable laptop setup, faster processors, more memory and bigger disks are available to drive this thing faster and further.

[Image: CPU-Z screenshot showing the X220’s configuration]

The system boots fast and resumes from standby instantly. The screen is really sharp and the computer executes everything quickly. Best of all, battery life is completely outstanding. I can pound on this thing for 5-6 hours without refueling. If I’m just watching videos, it’s a couple of hours more than that – excellent for long plane rides. I’ve stopped carrying my iPad. At 1.0” thick (there is another 0.25” bump where the battery is) and weighing in at about 3 pounds, it’s light and goes almost anywhere my iPad went, and I like the keyboard way better.

As with most things, not all is perfect. The machine comes with IBM/Lenovo’s classic TrackPoint device, which I’ve always loved. It also comes with a touchpad. You can set the machine to recognize one or the other or both. Problem is, the touchpad sorta sucks. It doesn’t track consistently, and trying to use it alongside the TrackPoint requires manual dexterity that genetics hasn’t quite yet refined. So, I have the touchpad turned off. The other problem is with the display. While it’s bright and sharp and colors are superbly reproduced, 768 pixels filling the roughly 6.25” of screen height just doesn’t cut it. As much as media wants to go widescreen, productivity apps still long for good ol’ 4:3 – or at least a physically taller display so that what’s displayed is easier to read. There’s just not enough vertical information on screen when trying to get real work done or even just browsing the web.

Do these problems detract from the experience? Perhaps. Everyone needs to decide for themselves. For me, the screen height thing keeps this from being a perfect, do-everything computing device, but it’s just not enough of an issue to spoil all the advantages that it offers. I suggest you take a look at one before buying to judge for yourself, though. It may be a more substantial issue for you.

The battery life on its own makes this computer terrific. Add to that the speed, great keyboard, bright display, Windows 7 and upgradeability and I think this will be my laptop for many years to come. Even if I have to do a lot of vertical scrolling.

 June 30th, 2011  
 Will  
 Computers, Gadgets  
   
 Comments Off on Gadget Review–Thinkpad X220

Windows Server 2003 (or Windows Home Server) Account Lockout

I had a long power outage yesterday that caused my servers to shut down. While they are on UPSes, 5 hours or so without power ran ‘em dry. When I tried to hook my desktops up to them after the outage, I couldn’t access the shares on one of the servers. I kept getting an error telling me that my account was locked out. When I used Remote Desktop to log in to the server (with the administrator account), the accounts were, indeed, locked.

After playing around for a while and searching for a solution on the net, I ran into a reference to a fix for a similar problem that was wacky enough to be worth a try. As it turns out, the clock on my server had not updated itself for some reason and, in fact, had been reset to some date in 2007. Once I updated the clock on the server to the correct date and time and reset the lock status of the accounts, I had no more difficulties. Apparently, this is a security feature. I just wish it had been a bit easier to diagnose.
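
If you run into something similar, it’s worth checking how far the server’s clock has drifted before digging into anything else. Here’s a minimal sketch of that kind of check in Python – the NTP server name is just a placeholder and the 300-second threshold simply mirrors the usual five-minute Kerberos skew tolerance, so treat it as an illustration rather than the exact fix I applied:

    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"      # placeholder; any reachable NTP server will do
    NTP_EPOCH_OFFSET = 2208988800    # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def ntp_time(server=NTP_SERVER, timeout=5):
        """Return the current time from an NTP server as seconds since the Unix epoch."""
        packet = b"\x1b" + 47 * b"\x00"   # minimal SNTP client request: LI=0, VN=3, Mode=3
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(512)
        finally:
            sock.close()
        # The reply's transmit timestamp (integer seconds) sits at bytes 40-43.
        secs = struct.unpack("!I", data[40:44])[0]
        return secs - NTP_EPOCH_OFFSET

    drift = time.time() - ntp_time()
    print("Local clock differs from NTP by %.1f seconds" % drift)
    if abs(drift) > 300:   # Kerberos tolerates roughly 5 minutes of skew by default
        print("That's enough skew to cause authentication failures and lockouts")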

 April 2nd, 2011  
 Will  
 Computers  
   
 2 Comments

The History of Computers [ala Techking]

This is great (thanks, Techking, for creating it, and @djilk for pointing it out). A humbling look at where computers started and how they advanced. There are many missing, including some of the more recent super-duper computers produced to work on some nasty computational problems and a couple built to become chess masters and Jeopardy winners, but the key ones are pretty much all here.

[Image: chronological timeline of computers, via Techking]

Check out other cool infographics on Techking.

 February 28th, 2011  
 Will  
 Computers  
   
 1 Comment

Can Apple Take on the World and Win?

It’s difficult not to respect all that Apple has achieved both as a computer company and as a consumer electronics crack dealer. They have great products and hugely loyal fans (er, customers). Their terrific execution has allowed them to buck the trend of openness by providing what a wide swath of consumers want – a solution that, more often than many others, just works and looks great doing it. Part of the reason that the company has been able to do this is that they haven’t gone it alone. Apple moved from completely proprietary hardware and operating systems to de facto standards (at least at their core, adopting Intel processors and Unix); Parallels and VMware have opened the Mac up to popular Windows apps; Firefox is the Mac’s primary window to the web world; Adobe makes sure that Macs have access to the most widely used document and photo formats; and Google’s inclusion makes sure that Mac users have top-notch access to the search giant’s internet tentacles. Apple has wisely leveraged what’s available in the market so they don’t have to take on the entire world at once.

But not so much anymore. It seems that Steve Jobs and Co. have expanded their battlefield beyond just Redmond to the folks at Adobe (Flash? Who needs it. Acrobat? We can do that. Lightroom? Nah, we have Aperture), Intel (through Apple’s acquisition of PA Semi), Amazon (eBooks, iTunes) and, especially, Google. That’s a lot of fronts to do battle on. Good, aggressive business practice . . . possibly. Hubris . . . likely. While small battles have been brewing for a while, with Apple supplying applications that compete on the fringes with several of these players and some of these “partners” pushing onto Apple’s turf, there hasn’t previously been an all-out war. The question is, can Apple maintain its success going it alone? They’re going to have to if they’re going to “go to the mattresses” with all the big guys they have relied on in the past.

A big test will happen this year with tablets. The iPad (the iPhone XXL) will have to rely on the strength of its base of iPhone apps to differentiate it as we are deluged with a tidal wave of new tablet offerings from a variety of vendors. We’ll see multiple operating systems housed in hardware taking many shapes and forms. Some of these will be strongly supported by Google and will leverage a broad array of Google services, technologies and overall openness. Some will leverage the economies of scale of large PC production to create lower-cost offerings with more features. It’ll also be interesting to see what Amazon does as it defends its ebook turf.

I’m by no means saying that the king is going to be dethroned anytime soon, but I do believe that it’s one thing to flank your competition by being different and another to attack frontally going it alone. As a consumer of all the crap these guys produce, I’m loving sitting on the sidelines watching this melee. In the end, it’ll just mean that I get more, better toys. To that end, I’m fully in Apple’s corner for once.

 February 10th, 2010  
 Will  
 Computers, General Business  
   
 4 Comments

Build Platforms on Platforms

Being a software guy myself, I often find that I dig a little deeper into the successes and failures of the software-oriented startups that I work with than I do with the non-software-oriented ones.  When I do, I suppose that I shouldn’t be surprised, although I routinely am, at how often I come across some very consistent and basic technical errors made by these companies.  Chief among these is the lack of thorough thinking about the architecture of the end product prior to the start of coding.  It’s, of course, natural to start hammering out code as fast as possible in order to get a product to market but, inevitably, the Piper needs to get paid, and fundamental problems with the architecture will eventually require a widespread rewrite of the system or, even worse, will be a serious resource drain and time sink in every future release.

You’ve probably read dozens of books that have discussed the importance and value of planning and how time spent architecting a system is a drop in the bucket compared to the time it saves on the back end.  I have neither the skills nor the eloquence to drive that point home any better.  What I’d like to do, though, is present a high-level view of how you might think about the architecture of your product so that it provides a framework for you to make rapid changes to the application and makes it easy for others (partners, customers, etc.) to extend the product in ways you may not have considered.

There is nothing revolutionary here.  Let’s just call it a reminder that you will end up rewriting your application or, at least, its framework, in the future if you don’t adopt something like this early on.  You may not see it yet, but like I’ve already said, that rewrite is going to be very expensive and painful and will ultimately cost you customers, competitive advantage and money.

[Image: layered architecture diagram – an application built on a base programming interface, with a higher-level programming interface exposing both to the outside world]

The idea here is that there are two programming interfaces.  One separates your application from your core libraries or base layer of functions; the other separates your application, as well as that lower-level programming interface, from the outside world.  The lower-level, base programming interface allows you to build an application virtually independent of the core functionality of the end product.  Architected this way, you can build and test the application and the base code separately and make incremental changes to each part far more easily.  In fact, one can be changed without affecting the other as long as the base programming interface remains the same (it needs to be well thought out to start with, of course).

The higher-level programming interface gives you the power to add functionality to your product quickly, using the code in the base programming interface as well as code in the application layer.  Using the application programming interface, you can prototype new functions rapidly and get quick fixes for bugs to users faster.  Perhaps even more importantly, it enables easy access to most of the guts of your system for partners and customers so that they can extend it as they see fit.  This access can be provided without having to publish hooks to the internals of your core system and without exposing the boatload of potential problems that foreign calls to those components can create.  If you’d like, though, you can also expose some of that base functionality through the high-level API, as shown in the “optional” architecture slice in the image above.
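
To make the shape of this concrete, here’s a tiny sketch in Python.  The names (BaseStore, Application, PublicAPI) are made up for illustration – the point is only the layering: an application written against a stable base interface, and a higher-level interface that exposes the application (and, optionally, selected base calls) to the outside world.

    class BaseStore:
        """Base layer: core functionality hidden behind a stable internal interface."""
        def __init__(self):
            self._records = {}

        def put(self, key, value):
            self._records[key] = value

        def get(self, key):
            return self._records.get(key)


    class Application:
        """Application layer: written only against BaseStore's interface, so either
        side can change independently as long as that interface holds."""
        def __init__(self, store):
            self._store = store

        def save_note(self, title, body):
            self._store.put(title, body)

        def read_note(self, title):
            return self._store.get(title) or "<missing>"


    class PublicAPI:
        """Higher-level interface for partners and customers.  It wraps the
        application (and, optionally, selected base calls) without publishing
        hooks into the internals of either layer."""
        def __init__(self, app, store=None):
            self._app = app
            self._store = store    # the "optional" slice from the diagram

        def create(self, title, body):
            self._app.save_note(title, body)

        def fetch(self, title):
            return self._app.read_note(title)

        def raw_get(self, key):
            if self._store is None:
                raise RuntimeError("base access not exposed")
            return self._store.get(key)


    store = BaseStore()
    api = PublicAPI(Application(store), store)
    api.create("todo", "ship the release")
    print(api.fetch("todo"))

Swap out BaseStore’s internals entirely and neither Application nor PublicAPI needs to change – which is the whole point.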

Simple, yes.  It requires more work up front – both in planning and in coding – but with such an architecture, you’ll be able to roll out new functionality quickly and to fix mistakes as fast as you find them (well, almost).  Ultimately, you’ll get the functionality your customers want into their hands faster than if you hadn’t adopted such a system.  You’ll also be able to continue to roll out enhanced and improved functionality without getting bogged down with thinking about an architecture rewrite or with a huge backlog of nasty bug fixes.

The anxiety about getting your product to market will lead you to think that hacking together a system and refining it later is the way to go.  Virtually always, this is a mistake.  Speed is of the essence, but only the speed at which you can deliver a sustainable, quality product that continuously stays ahead of the competition.  Look before you leap; it’ll make life so much easier.

 February 1st, 2010  
 Will  
 Computers, Management, Software, Startups  
   
 Comments Off on Build Platforms on Platforms

Livin’ in the Cloud

When it comes to my data, I’m a suspenders-and-belt kinda guy.  It can’t be in too many places or have too many layers of security.  As with investing one’s hard-earned cash, diversification is critical to success.  As such, I have loads of internal backup and security methods that are part of my routine.  I ghost a copy of the primary drive in my desktop to an auxiliary drive inside the same machine; I have a Windows Home Server in my house which does a differential backup of my files every few days; and I even sync critical files with a USB memory stick that I can take with me if I need/want to.  OK, maybe that’s a couple of sets of suspenders and a belt or two.  What can I say?

I’ve been thinking about also syncing and backing up some data to the cloud over the last six months and took the plunge a couple of months ago.  I’ve thought about what I really want out of cloud storage and have tried several offerings.  I’ll talk about these specifically, but first, a little background on my thinking and what I was looking for.

It seems to me that when it comes to the storage of data in the cloud, as opposed to the actual use of it, there are three general types of storage solutions – raw file storage, synced/backup file storage, and content-specific storage.  Raw cloud-based file storage is simply disk space somewhere on the internet that you can do whatever you want with (think Amazon S3).  Synced storage is similar, but it’s usually set up specifically to facilitate the synchronization or backup of data between a PC and similar disk space elsewhere on the net.  Content-specific storage is set up for particular data types like email, photos, music, etc.

When cloud storage is segmented this way, one quickly realizes that all email users have been cloud storage consumers for a while.  Whether you use a basic POP or IMAP server for your email or something heavier duty like Exchange or Notes, your email has been in the cloud at least for some period of time.  So, you, like me, are already likely a user of cloud storage.  This rationalization helped me feel more comfortable about moving my data to someplace unknown.

In the end, I found I was most interested in having storage for backups and syncing to keep multiple computers up to date.  Services for the latter often assume the former – a cloud-based synced storage provider often has nice backup capabilities as well.  After all, backup is the same storage mechanism without the sync function.  I also wanted to expand my specialized storage to include my large photo collection.  For this, I wanted a photo-specific site that offered galleries and photo management.  These, of course, are not offered by the raw or synced backup folks.

While I hardly tried all the services available, I did try a few, including Amazon S3, Microsoft’s SkyDrive, Microsoft’s Live Mesh, Syncplicity, KeepVault, SmugMug and Flickr.  Here are my thoughts:

  • Amazon S3 – S3 is simply raw storage and it lies underneath many of the other, higher-level cloud storage services out there.  There’s no high-level interface per se and, as it states clearly on the Amazon AWS site, it’s “intentionally built with a minimal feature set.”  At $0.15/GB/month it isn’t even that cheap compared to some other services – 200GB of backup costs $360 a year.  Oh yeah, I can do basic math . . . (There’s a small code sketch of what “raw” means in practice right after this list.)
  • SkyDrive – It’s “integrated” with Microsoft’s unbelievably confusing array of Windows Live services.  I consider myself pretty knowledgeable about Microsoft stuff, but this Windows Live thing is hard to understand.  It works nicely, but there isn’t really any client on the PC side.  Uploading files is done a handful at a time and there is no syncing.  It’s really about sharing files and doesn’t offer any automated backup or syncing.  Even for bulk storage, it’s too difficult to use.  They offer 25GB of storage for free. 
  • Live Mesh – I like Live Mesh a lot.  Live Mesh is all about synchronization between multiple machines, including Macs (beta) and mobile phones (“soon”), as well as online through a web browser.  It works totally behind the scenes, is fast and has the best reporting about what it did and what it’s doing of any service I tried.  It also offers features like accessing the desktop of a Live Mesh-connected computer and a nice chat and feedback facility for sharing and commenting on shared documents.  My only problem with Live Mesh was the level of file granularity for syncing.  Live Mesh only understands directories, not individual files.  Sometimes, you just don’t want the entire directory synced.  The initial 5GB of storage is free.  It’s still in beta.
  • Syncplicity – It’s my favorite of all the sync/backup solutions so far.  It makes assumptions about the directories you want to sync or back up, and adding different ones is a tad confusing, but once you get it, it’s all a piece of cake.  The reporting on what it’s doing isn’t as nice as Live Mesh’s, but it’s just as seamless and it’s pretty fast (like Live Mesh).  Unlike Live Mesh, individual files can be added to or removed from a sync tree by right-clicking them (Windows) and specifying whether or not the file should be included.  Also, it’s easy to specify whether you want files to be synced with other machines or just backed up.  I’m still not completely content with how Syncplicity deals with conflicts.  No data is ever lost, but it can be duplicated, leaving copies scattered in your directories.  Also, I had one really nasty problem with the service.  The Syncplicity client was sucking up 10%-50% of the CPU time on my machine – all the time.  I sent messages to Syncplicity support and complained about the problem on their forum.  Nothing, zero, no response for weeks.  In fact, to this day, I’ve gotten no response.  I eventually figured the problem out myself.  A TrueCrypt-encrypted volume in a directory on my machine was screwing the client up.  Once it was removed from the sync tree, the problem was gone.  Just horrible service.  There is a free 2GB trial and then it’s $99/year for the first 100GB.  That’s a 50% discount offer that’s been running for a while.
  • KeepVault – I tried this out because it integrates nicely with the Windows Home Server Console.  I’m using it specifically to back up my server – no desktops included and no synchronization, just backup.  It seems to work well, but the initial backup of 150GB of data took about 16 days even when I was not throttling the speed of the connection (a nice option for a server, BTW).  Additionally, the backup process stalled about 20 times during the initial backup.  Now that it’s only dealing with a handful of files, albeit big ones, at a time, it seems to be working well.  Jury’s still out.  No trial, but a 30-day money-back guarantee.  $180 for 200GB of backup.
  • SmugMug – I have 42GB of photos on my server which represent the most cherished of all data I have.  At the very least, I needed to backup these files to another physical location.  At best, it would be nice if the data could be organized and viewed from that location as well.  I looked at many sites, including Flickr (the relative standard in this space) and chose SmugMug.  The difference is that SmugMug is aimed at photographers who at least think there is some level of professionalism in their shots.  SmugMug’s pages are totally customizable and they understand not to mess with pictures being uploaded (unless you want them to).  It’s about the gallery first and about sharing second.  Just what I wanted – I’ve never learned how to share well 🙂
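
As promised above, here’s roughly what “raw” means in practice with S3.  This is a minimal sketch using today’s boto3 SDK (not what existed when I wrote this), and it assumes Python with boto3 installed, AWS credentials already configured and made-up bucket/file names – you get put and get, and not much else:

    import boto3   # AWS SDK for Python; assumes credentials are already configured

    s3 = boto3.client("s3")

    BUCKET = "my-backup-bucket"          # hypothetical bucket name
    LOCAL_FILE = "photos-archive.zip"    # hypothetical local archive
    KEY = "backups/photos-archive.zip"   # object key to store it under

    # Push a local file up to the bucket...
    s3.upload_file(LOCAL_FILE, BUCKET, KEY)

    # ...and pull it back down when you need to restore.
    s3.download_file(BUCKET, KEY, "restored-photos-archive.zip")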

There are loads of other services out there, including some I considered but decided not to try on this first pass – DropBox, ZumoDrive, iDrive, Soonr, Jungle Disk, etc.  In general, I’m feeling better about having my data somewhere else.  The process is easy and, as far as I can tell, secure.  Syncing can certainly get better, though, and when there’s a failure, it’s very hard to debug, even if you can detect that it happened in the first place.  Sometimes, as with any backup, you don’t know there was a problem until an emergency happens and you really need to restore a file.  Not painless, but the barriers to trying it out are fairly low.  Come on in, the water’s fine . . . so far.

 June 17th, 2009  
 Will  
 Computers, Photography, Software  
   
 26 Comments

Lenovo X60s and Windows 7

[Image: Windows 7 running full screen on the X60s]

I’ve been toying with the idea of buying a netbook to use as a cross between my iPhone and my desktop.  My trusty Lenovo X60s has filled that role for years, but my crack-addiction-like need for new technology combined with the allure of a netbook’s lighter weight has been my siren song for the new platform.  Not that at 3.4lbs (with the six-cell battery) the X60s is a heavyweight or anything but, as everyone knows, you can never be too thin.  Even if you’re a computer.

Since I was planning on putting the beta of Windows 7 on the netbook anyway, last night I took a deep breath, burned down the X60s, reformatted the drive and installed Windows 7 on it just to see what it would be like.  Not only did the conversion go way better than planned, but Windows 7 breathed new life into my trusty steed.  Windows 7 is amazingly stable for a beta and a good deal faster on the machine than Vista Business and XP were.  The new OS consumes fewer resources in general and memory in particular.  Yes, even when taking into account a fresh installation.  Additionally, I like the user interface changes (a lot!) which let you do much more with less real estate – critical on a small laptop.

I was a bit worried about drivers and overall compatibility issues.  I had none.  Windows update downloaded all the Lenovo and X60s-specific drivers I needed.  I only had two issues when all was said and done.

  1. A couple of tools I have for poking around the file system didn’t work.  This included a couple of desktop gadgets and a file browser.  I found new gadgets updated for Windows 7 and got around the file browser problem by changing some settings in the respective programs.
  2. I had a load of trouble connecting Outlook 2007 to my Exchange Server (2007).  According to my Exchange provider, there are some differences (I can’t imagine why an OS change would mandate these).  Again, after poking around a lot, I was able to get Exchange to work.  If you’re having similar problems, search for how to connect Windows Mobile with Exchange.  Use the same user and password you would use with Windows Mobile.

I’m back to loving my X60s and plan on keeping it until it drops dead.  I have it loaded with applications and data and it still has plenty-o-room to breathe.  Sure, I would like it to be about 30% lighter, but I don’t have to be Schwarzenegger to carry it as is.  I think Windows 7 will just get better with its release later this year, too.  There are probably a zillion lines of debugging code in this version that will be removed for the final release.  That alone should make it even faster.

Since some Apple fanboi is going to flame in the comments about how all the good UI stuff in Windows 7 was stolen from OS X, let me say up front: you’re probably, mostly, right.  Although not completely.  Personally, I’ll take openness and, thus, broad application availability over locked-down any day, even at the expense of some quirks and a less-than-optimal UI – it’s the American way.  ‘Nuff said.

 April 8th, 2009  
 Will  
 Computers  
   
 13 Comments

Accessing the Buffalo Terastation from Vista

I was an early adopter of the Terastation when it was released a few years ago.  The Terastation is a pretty typical (well, now pretty typical) SOHO-type NAS that is a fairly inexpensive solution for getting loads-o-disk space on your local network.  It’s not screamingly fast, but it’s got loads of features, including a good web interface, some basic security, gigabit networking, multiple RAID configurations for its four drives (including RAID 5) and a built-in media server that works very well with Buffalo’s excellent LinkTheater media players.

Having all this happiness with the solution only made for that much more dismay when I discovered that Vista doesn’t play nicely with the Terastation.  For the most part, machines running Vista can’t see what’s on shared folders hosted on the Buffalo NAS.  Like most problems, though, I was able to find a resolution to this issue by searching the web.  It’s always a good thing when you realize that no matter how much of an early adopter of technology you are, there is always someone who has blazed the trail ahead of you.

The bottom line is that Vista puts security in front of functionality and all you have to do to get the Terastation to work is mildly circumvent some of that protection.  I found the very nicely described solution on the Scale|Free blog.  It’s pretty easy to implement and I’m sure applies to other NAS solutions that may not have yet been updated to play nice with Vista.

  • Run the Local Security Policy app – secpol.msc
  • Go to Local Policies | Security Options and choose the “Network Security: LAN Manager Authentication Level” item
  • Set it to “Send LM & NTLM – use NTLMv2 session security if negotiated”

Basically, Vista is set up to use NTLMv2 only.  All this change does is add the old LM security protocol back into the mix while still using the newer protocol when it’s called for.
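
If you’d rather script the change than click through secpol.msc, the same setting appears to live in the registry as LmCompatibilityLevel – a value of 1 corresponds, as far as I know, to “Send LM & NTLM – use NTLMv2 session security if negotiated”.  Here’s a minimal sketch in Python, assuming Python 3 on the Vista box and an elevated (administrator) prompt:

    import winreg

    LSA_KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

    # Open the LSA key with write access and set the LAN Manager authentication level.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "LmCompatibilityLevel", 0, winreg.REG_DWORD, 1)

    print("LmCompatibilityLevel set to 1 (a log off or reboot may be needed to take effect)")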

Works like a charm.

 July 9th, 2007  
 Will  
 Computers, How To  
   
 20 Comments

In Memory of Digital Equipment Corporation

[Image: my old DEC badge]

I mentioned that I worked for DEC to someone recently and they had no idea what I was talking about.  Granted, the person was young, but he was an adult.  Funny how the second-largest computer company in the world in its time, and the inventor of the minicomputer, could be so quickly forgotten.  When I joined DEC in 1981, revenues were well below the peak of roughly $14B they would hit in 1989.  In fact, I think they may have been about $3B.  Digital was still basking in the glory of the VAX, which was released in the late 70s and was totally disrupting the old mainframe business dominated by IBM.

DEC’s headquarters was in an old woolen mill in Maynard, Massachusetts, not some shiny steel and glass structure in Silicon Valley.  That is part of what made it cool to work there.  In its own way, the place was sort of the Googleplex of the time and the company was innovative, fast-moving and a blast to work for.

Like any company, DEC did a lot of stupid things.  Let’s face it, the company doesn’t exist any more which probably means it made at least a few key mistakes.  One thing it did profoundly well, though, was recruit new talent.  The company was full of smart people that attracted other smart people.  The company also made a committed effort at making its computers available on college campuses around the world.  In a time where university computing meant punch cards and big black boxes, DEC hooked young engineers and scientists with interactive computing.  For those of you who can’t imagine life without a GUI, you probably have trouble understanding the magnitude of this change.  It was huge and made loads of young people (like me) want to work for the company driving this sea change.

This marketing fed the company’s almost insatiable need to hire and grow in the 80s.  In fact, the company grew almost uncontrollably (there’s one of those key mistakes).  As such, there never seemed to be many good managers around (I was fortunate to work for a good one – thanks, Alain).  For renegade employees willing to work hard, the environment was unreal – all the resources you could want at your disposal.  For those who wanted to slack off, though, there was always a place to hide.  We called it, “retirement for the young.”

The group that I worked in was an especially good one.  It was in DEC’s semiconductor engineering facility in Hudson, MA.  In those days, DEC was pouring money into semiconductor physics, manufacturing and software tools for the development of processors.  I was fortunate enough to be in a small team of really smart people that always kept the bar high.  The group created some incredible stuff back then, in fact, some of the underlying technology we created is still in use today in one form or another.

When I was hired, I was a software guy who had just been through two failed startups – one because of someone else’s mistakes, one because of my own – I’m a slow learner.  Within a year or so at DEC, though, I was running the internal chip design course and designing my own microprocessor (the rectangle on the badge above is one of the chips).  That was the kind of huge opportunity that was available in the company if you wanted it.  It was easier back then to make such a domain leap, of course – wire widths weren’t measured in wavelengths of light, but with a tape measure.  You could practically draw the physical layout with a crayon.  The important thing was that someone like me had the chance to make that kind of move.  It just doesn’t happen often today.

I left DEC in 1984 to start Viewlogic Systems with four other guys that I worked with.  Viewlogic was a big success and a great experience, but it was really difficult leaving DEC.  Many of the things I learned there influenced what I did when building new companies – mostly positive things, I think.  I’m sure it’s just my fond memories of the place, but I think that DEC had a huge impact on how we look at and run technical businesses today.  It’s too bad that it isn’t remembered more (or at all) for the hugely positive effect it has had on many of today’s leading technology companies.  It was even one of the first venture-backed companies in the US, having taken $70,000 to start up in 1957.

I can only hope that some day people who worked at one of the companies that I have been responsible for will have similar positive memories of their experiences while employed there.  If so, much of it will have been influenced by my own great experience at the once great company, DEC.

 June 9th, 2007  
 Will  
 Computers, Misc Thoughts  
   
 11 Comments

A Crash-Course in VPS

I mentioned last week that I was moving this blog off of an internal server to an externally sourced one at 1&1.  Further, I chose to use a VPS, or Virtual Private Server, for the installation.  For servers that don’t require loads of disk space, CPU power or memory, a virtual server makes loads of sense.  They are a lot cheaper than a dedicated server while giving the owner the same level of control (complete root access).  The options with such a server obviously exceed those of a simple web host that just allows its owner to control the pages of a web site and, perhaps, a database behind it.

My goal is to use the server to host multiple web sites, including this blog, serve a variety of files – both public and private, run an IMAP/POP/SMTP mail server (Merak), and a couple of background processing programs of my own creation (therefore, fat and inefficient).  None of these processes is very CPU critical so I wasn’t too concerned with the overall CPU load on the VPS.  Also, after calculating the amount of disk space I needed, I found that I could easily last for a while on 10GB.  This seems small, but the mail server compresses its files and I don’t need this server to serve audio/video (I have another server for that) reducing the disk space need substantially.  I was a bit concerned about memory, but it seemed to me that all these applications could run in the 300MB space offered by 1&1’s lowest-end VPS offering.  It’s in this last decision that I found I was very wrong.

Especially since 1&1’s VPSes run a 64-bit version of Windows Server 2003, memory got consumed pretty fast.  If I had stuck with just the web and file server functions, I could have squeaked by, but my own applications plus the various processes that are part of the mail server kept me right at the upper bound of the virtual memory space I had.  This might even have been acceptable except that the virtual machine didn’t like running out of memory and often crashed.

So, I upgraded the server to a higher-end VPS offering from 1&1, which I’m afraid required a complete rebuild.  The new server offers more disk space, more CPU and more memory, although not a ton more at 500MB.  This gives my applications plenty of breathing room though.  I finally completed the transfer of everything this morning and it all appears to be running OK.  We’ll see.

If you’re looking at a virtual server solution, look hard at your memory requirements.  Hunting around at all the service providers out there (there are many), I found that not all of them are up front about the memory space available.  1&1 buried this factor more than most – they were clear about the disk space and CPU levels available, but I had to really hunt to get memory info.  This is especially bad since it ends up being the most important factor.
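
A rough way to do that sizing: before signing up, tally what your workload actually uses on the machine that runs it today.  Here’s a sketch using the third-party psutil package – the process names in the list are hypothetical, so substitute your own web server, mail server and scripts:

    import psutil   # third-party package: pip install psutil

    # Hypothetical process names -- replace with your web server, mail server, etc.
    INTERESTING = {"w3wp.exe", "httpd.exe", "mysqld.exe", "python.exe"}

    total_rss = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = (proc.info["name"] or "").lower()
        mem = proc.info["memory_info"]
        if name in INTERESTING and mem is not None:
            total_rss += mem.rss    # resident set size, in bytes

    mb = 1024 * 1024
    print("Resident memory used by the workload: %.0f MB" % (total_rss / mb))
    print("Total memory on this machine: %.0f MB" % (psutil.virtual_memory().total / mb))

Add a healthy margin for the OS itself (a 64-bit Windows install alone will chew through a big chunk of a 300MB allowance) and compare the total against what the provider actually promises.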

FWIW, you can also get a VPS from 1&1 and others that comes with Linux (mostly Fedora Core).  My guess is that you can build a leaner server with a Linux base so that may be a better option for you.

 June 9th, 2007  
 Will  
 Computers, Misc Thoughts  
   
 2 Comments