Entrepreneurial Leadership and Management . . . and Other Stuff


Gadget Review–Windows Phone 7

Yeah, I’m pretty late to the game here. That said, I bet I’ve played with this (thanks Shawn!) before most of the people I know or who read this blog. In fact, I’d be willing to bet that most of the people I know never intend to give it a try (so why are you still reading?). Well, if you’re one of those people, you owe it to yourself to at least give Windows Phone 7 fifteen minutes of playtime at your favorite phone store or, even better, a 15-30 day trial from your favorite carrier. It’s still a 1.x phone and, therefore, is missing some of the polish and completeness of its iPhone and Android competition, but I think it’s a fresh, cool approach to the smartphone category.

As a current Android user and a former (reformed?) iPhone user, I have a pretty good feel for what I use the phone for and what’s most important to me. I don’t play games on my phone. It’s a communication device primarily, a synced data access device (think Evernote and Smugphoto) secondarily and web browsing device tertiarily (did I just make that up?). I run a handful of apps that don’t fit into these categories, but they’re icing on the cake rather than the cake itself. In that light, I found Windows Phone 7 surprisingly satisfying and quite a bit different from the other platforms.

The first thing you notice is how fast the UI is. Everything runs smoothly without pause. Even on mediocre hardware, the OS feels quick. Add to that the constant visual feedback and animation in transitions and the experience is just cool. Also out of the chute, Microsoft’s choices of fonts and font sizes make the display clear and easy on the eye – almost playful. The tiled, mosaic home page makes getting to what you want quick and painless. Since I use just a few apps, most of those being native to all leading phones, I can always get to what I want fast. That seems to be the goal of the phone and one which it achieves . . . in a 1.x sorta way.

Email is my primary app when using a smartphone. Setting up the stock email program to work with my Gmail account (including contacts and calendar) was as easy as setting things up on Android. I’m still a bit unsure if Windows Phone uses all the correct labels from Gmail for spam, trash, etc. There’s also no basic “Archive” button, which I have become quite used to in Gmail. Also missing is threaded messaging. For some, that’ll be enough of a deal killer. Apparently, MS is going to add it in a later rev of the software. Moving between messages is easy and reading them even easier. With threaded messaging and a little more Gmail integration, this email app could blow away the stock Android app. For example, it’s much easier to move a message to a folder (change its label in Gmail-speak) than on the Android Gmail app.

Contacts get a little weird. If you’re big into email like me, your contact list is critical. Windows Phone sucks down your contacts from Facebook and merges them with your other contacts. I don’t like that at all. Segregation of contacts is important to me. I have Facebook “friends” who shouldn’t be allowed to mingle with my real friends, if you know what I mean. Apparently, there’s a way to sorta separate them, but there’s still bussing between the lists. Funny enough, Twitter followers or followees are not allowed to participate here – at all. Word on the street is that this will be addressed in the next version.

As you’d expect, there aren’t many apps available. Important ones, like Evernote, are there, but other basic ones aren’t yet. One gets the idea that they’re coming. Just very slowly. If you’re an app hound, the list may never be long enough on this OS for you. For me, I think the key apps will be there shortly.

Perhaps the biggest current failing of the phone is no multi-tasking. Actually, I shouldn’t say that there is no multitasking, the native apps seem to do it just fine. Zune runs in the background, mail downloads in the background, etc. It’s just not available to third party apps. MS has to rectify this or this phone will be a total loser. Again, apparently they’re workin’ on it. Funny, it seems like they should know something about implementing multitasking, huh?

Browsing is fast and efficient. SMS is more than reasonable. Oh yeah, the phone works great, just like a phone should.

I was pleasantly surprised with Windows Phone 7. Can Microsoft pull it off and become a contender? I hope so. Not only because I’m a MS fan, but because I’d love to see more competition driving this market.

 June 27th, 2011  
 Gadgets, Mobile, Software  
 1 Comment

Working on a WordPress Plugin

It’s been a long time since I’ve written any serious code, but that doesn’t keep me from dabbling every now and again. Of course, I always find myself on the steepest part of the learning curve when I come around to engaging with a new compiler, debugger, language or environment. Since I do it rarely, I tend to forget everything I learned the last time and usually end up changing something major between forays – language, environment, libraries or something – it’s new every time. That’s OK, but it takes a lot of time and energy to simply catch up let alone move forward.

My latest trial is creating (more like changing and adding) code written in PHP. Specifically, I’m making a cut at taking over a now-unsupported plugin in WordPress. As with most languages and environments, that means I have to ramp in several domains: PHP, SQL and WordPress, primarily, but there’s a bunch of smaller stuff too.

The plugin I’m working on is a branch of Now Reading Reloaded which itself is a branch of the original Now Reading plugin. The authors of both decided that they didn’t have time to continue to enhance them. After spending about a week getting my head into the process, I don’t blame them.

Now Reading Reloaded allows me to track and comment on books I read and lets me keep a virtual library that I can access and share on my blog (see the widget on the left with all the pretty book covers or the Library link in the menu for the full list). For the most part, it works well. I have made some modifications to it in the past, but minor, visual ones primarily. What I want to do is make some functional modifications that require changes deep in the guts of the code.

WordPress is conceptually simple and PHP is pretty straightforward (it’s a scripting language, though, and therefore it is always a bit funky). SQL is SQL, arcane as always, but totally standardized. Put them all together, though, and it’s somewhat dizzying, at least for a newbie at it like me. The structure of WordPress plugins is regimented, but is too complex to allow one to just dip a toe in the water. I’m going to have to do a deep dive if I’m going to pull this off.

Here goes . . .

 March 8th, 2011  

Fixing Broken MP3 Files

About six months ago, I updated to a new version of iTunes and found that only about 20% of my MP3s could be added to my library. That’s not 20% of the new ones, but 20% of the songs that were already loaded in the previous version of iTunes (I’m omitting the long story of why I cleared my library and reloaded it). So, out of my 3,000 or so songs, iTunes only recognized about 600 of them. The first thing that went through my head was that Steve Jobs had personally blacklisted almost all the tunes I like. Come on . . . that’s no more rational than your first knee-jerk reaction to it, right? After dismissing the idea that Mr. Jobs might be carrying out a personal vendetta against me, I really panicked when I thought my music had somehow actually become corrupted. But no, the music was recognized by the myriad other music players on my computer without any problems – I could even play them without issue from Windows Explorer.

My next thought was, do I even care? After all, the other music players were playing my music just fine. The trouble is that if you want to get music on any of the latest generation iPods/iPhones, you need iTunes. Eventually, the other players will catch up, but Apple seems determined to stay ahead of them, making iTunes the only method for transferring music and building playlists on cool, little Apple music players. Nothing like openness . . .

Figuring out what was actually wrong was a challenge. Internet searches yielded almost nothing. While I suspected that iTunes itself was the root cause of the problem – screwing up the files at some point in the past – I couldn’t verify that. So, I had to hack at searches until I stumbled upon MP3val, a free tool (Windows only) that checks the integrity of any MP3 file. Once I ran it, I found out that the headers of most of my MP3 files were corrupted. Recent versions of iTunes, apparently, are a real bitch about having proper MP3 headers, so rather than telling me what was wrong, they just chose not to import the files with “problems.” I used MP3val to “fix” a few files and iTunes imported them without any issues. Finally, a solution.

But not so fast . . . while fixing the headers, MP3val lost most (almost always all) of the tag information in the header – title, artist, album art, album title, song title, etc. So, to really fix the files, I had to run them through MP3val and then re-insert all the tag information again. A serious pain in the ass. I experimented with several methods and came up with this one. If you have such a problem, I hope this helps.

  1. Make a copy of the directory tree where your music is – everything. You never know when you’ll mess something up even more and have to restore from your copy. Don’t worry about the disk space, assuming you have what you initially need, you’ll delete this backup when you’re done.
  2. Now, iterate through manageable segments of your music library (a hundred files or so at a time if you can – I keep my music in subdirectories broken up by genre and/or date so I just dealt with one directory at a time) – and perform each of these steps on the group of files:
    • Use your favorite tagging program (iTunes obviously won’t work) to create new filenames for the songs that contain all the tag information from each MP3 file (they’ll look something like this: “Doobie Brothers – Long Train Runnin’ – Best of the Doobies – 02 – Rock – 1976.mp3”). If your problem is similar to mine, you’ll be able to get everything except album art into the filename. I used MP3tag’s “convert” function to do this. The idea is to retain as much tag information as possible prior to running MP3val to fix the header.
    • Then, use MP3val to “scan” then “repair” the files with problems. You’ll likely, but not always, lose all existing tag info.
    • Go back to your tagging program and reverse the process from the first bullet, above. Convert the filename back into tag information.
    • If you have missing or incorrect tag information, now would be a good time to fix it. This includes importing album art. Most tagging programs can look up the album art for you automatically assuming you have the correct album name.
  3. Once you’re sure everything worked out, delete the backup.
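The heart of step 2 – stashing the tags in the filename, repairing, then reading them back – can be sketched in Python. This is just a sketch of the idea, not the actual MP3tag workflow: the field order and the “ – ” separator are assumptions that mirror the example filename above, and the repair step assumes the mp3val binary is on your PATH.

```python
import subprocess
from pathlib import Path

# Field order matches the example filename above; the " – " separator
# is an assumption – use anything that never appears in your tags.
FIELDS = ["artist", "title", "album", "track", "genre", "year"]
SEP = " – "

def tags_to_filename(tags):
    """Step 2a: encode the tag fields into the filename itself."""
    return SEP.join(tags[f] for f in FIELDS) + ".mp3"

def filename_to_tags(name):
    """Step 2c: recover the tag fields from the filename."""
    return dict(zip(FIELDS, Path(name).stem.split(SEP)))

def repair(path):
    """Step 2b: mp3val -f rewrites a broken MP3 header in place;
    any tag info it drops is safe in the filename."""
    subprocess.run(["mp3val", "-f", str(path)], check=True)
```

Because the encode/decode pair round-trips cleanly, nothing but album art is lost no matter how roughly the repair treats the header.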

There you go, just three steps. OK, one of them is complicated and, if you have to go through 2,400 files, time consuming too. When I was done, iTunes read ’em all. I also got the anal-retentive monkey off my back by making sure all my music had its correct information and album art. While I still hate iTunes, I recognize its necessary evilness and am coping with it thanks to MP3val and a boatload of time to fix everything.

 October 28th, 2010  
 How To, Software  
 Comments Off on Fixing Broken MP3 Files

Build Platforms on Platforms

Being a software guy myself, I often find that I dig a little deeper into the successes and failures of the software-oriented startups that I work with than I do with the non-software oriented ones.  When I do, I suppose that I shouldn’t be surprised, although I routinely am, at how often I come across some very consistent and basic technical errors that are made by these companies.  Chief among these is the lack of thorough thinking about the architecture of the end product prior to the start of coding.  It’s, of course, natural to start hammering out code as fast as possible in order to get a product to market but, inevitably, the Piper needs to get paid and fundamental problems with the architecture will eventually require a widespread rewrite of the system or, even worse, will be a serious resource drain and time sink in every future release.

You’ve probably read dozens of books that have discussed the importance and value of planning and how time spent in architecting a system is a drop in the bucket compared to the time it saves on the back end.  I have neither the skills nor the eloquence to drive that point home any better.  What I’d like to do, though, is to present a high-level view of how you might think about the architecture of your product so that it provides a framework for you to make rapid changes to the application and makes it easy for others (partners, customers, etc.) to extend the product in ways you may not have considered.

There is nothing revolutionary here.  Let’s just call it a reminder that you will end up rewriting your application or, at least, its framework, in the future if you don’t adopt something like this early on.  You may not see it yet, but like I’ve already said, that rewrite is going to be very expensive and painful and will ultimately cost you customers, competitive advantage and money.


The idea here is that there are two programming interfaces.  One separating your application from your core libraries or base layer of functions and another separating your application, as well as the lower-level programming interface, from the outside world.  The lower-level, base programming interface allows you to build an application virtually independent of the core functionality of the end product.  Architected this way, you can build and test the application and the base code separately and make incremental changes to each part far more easily.  In fact, one can be changed without affecting the other as long as the base programming interface remains the same (it needs to be well thought out to start with, of course).

The higher-level programming interface gives you the power to add functionality to your product quickly, using the code in the base programming interface as well as code in the application layer.  Using the application programming interface, you can prototype new functions rapidly and get quick fixes for bugs to users faster.  Perhaps even more importantly, it enables easy access to most of the guts of your system to partners and customers so that they can extend it as they see fit.  This access can be provided without having to publish hooks to the internals of your core system and exposing a boatload of potential problems that foreign calls to those components can create.  If you’d like, though, you can also expose some of that base functionality to the high-level API as is shown in the “optional” architecture slice in the image above.
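As a sketch of what this layering looks like in code – hypothetical names, Python standing in for whatever your stack is – the base interface is the only seam between the application and the core, and the public API is the only seam between everything and the outside world:

```python
class BaseLayer:
    """Base programming interface: the only way the application
    touches core functionality."""
    def store(self, key, value): ...
    def fetch(self, key): ...

class InMemoryBase(BaseLayer):
    # One base implementation; it can be swapped out without touching
    # the application as long as the interface above stays stable.
    def __init__(self):
        self._data = {}
    def store(self, key, value):
        self._data[key] = value
    def fetch(self, key):
        return self._data.get(key)

class Application:
    """Built purely against BaseLayer; testable in isolation."""
    def __init__(self, base):
        self._base = base
    def save_note(self, name, text):
        self._base.store(f"note:{name}", text)
    def read_note(self, name):
        return self._base.fetch(f"note:{name}")

class PublicAPI:
    """Higher-level programming interface: what partners and customers
    call.  It forwards to the application (and, optionally, to selected
    base functions) without exposing the core internals."""
    def __init__(self, app):
        self._app = app
    def put(self, name, text):
        self._app.save_note(name, text)
    def get(self, name):
        return self._app.read_note(name)
```

The payoff is in the seams: InMemoryBase can be replaced by a disk- or network-backed implementation without touching Application or PublicAPI, and outsiders extend the product through PublicAPI without ever seeing the guts.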

Simple, yes.  It requires more work up front – both in planning and in coding – but with such an architecture, you’ll be able to roll out new functionality quickly and to fix mistakes as fast as you find them (well, almost).  Ultimately, you’ll get the functionality your customers want into their hands faster than if you hadn’t adopted such a system.  You’ll also be able to continue to roll out enhanced and improved functionality without getting bogged down with thinking about an architecture rewrite or with a huge backlog of nasty bug fixes.

The anxiety about getting your product to market will lead you to think that hacking together a system and refining it later is the way to go.  Virtually always, this is a mistake.  Speed is of the essence, but only the speed with which you can deliver a sustainable, quality product that continuously stays ahead of the competition.  Look before you leap – it’ll make life so much easier.

 February 1st, 2010  
 Computers, Management, Software, Startups  
 Comments Off on Build Platforms on Platforms

Adobe Lightroom: Floor Wax or Dessert Topping?

Some of you (older readers of this blog) might remember the old Saturday Night Live skit that aired in 1976 with Dan Aykroyd, Gilda Radner and Chevy Chase in which they argue whether a canned substance by the name of New Shimmer is a floor wax or dessert topping.  As you’d expect from an SNL commercial parody, it’s both.

After using Adobe Lightroom for about a year now, I remain similarly confused about what, exactly, it is.  Is it a photo editor or is it an asset management tool?  Well, it’s both; and neither.  Whatever it is, once you know how to use it, it’s a great tool for doing . . . stuff with your pictures.  Let me explain.

In a photo processing workflow, there is generally a tool for managing photos and how they are stored.  That is, where they are located (locally or remotely) and how they are organized in directory structures and with their metadata.  And, there is a tool for processing or editing photos – actually adjusting the way the photo looks.  Sometimes, these are the same tools.  Most often, however, best of breed tools win out and photographers use separate tools for each function.

Adobe develops the 800-pound gorilla in the photo editing business – Photoshop.  It’s the absolute leader in the segment.  It’s robust, has a million features and loads of add-ons available from third parties.  It also has a huge learning curve and an arcane user interface.  It’s meant to address the needs of a broad range of people, not just photographers.  As such, photographers have to sift through a boatload of functions that they will likely never use and learn techniques to make changes to photographs that aren’t always aligned with how photographers think.

On the asset management side, even Adobe doesn’t try to pack organizational functions into its behemoth.  Adobe has another product, called Bridge, to fill that role.  IMO, Bridge doesn’t cut it for many reasons.  It appears to be primarily designed to front-end Photoshop and, as such, is stuck with some of the same non-photographic concepts that weigh down its big brother.  It’s also slow.  To me, speed is an absolutely critical factor in asset management, especially considering that gigabytes’ worth of photo data are often somewhere on the network, not stored on a local disk drive.  The asset management tool needs to be ultra-fast to overcome network latency.

To address these issues, Adobe came up with Lightroom (actually, it came with Macromedia as part of Adobe’s acquisition of the company).  Lightroom is for photographers – it has a photographic workflow, including asset management tools and photo editing functions.  It doesn’t have all the asset management tools of some dedicated asset management products and certainly doesn’t have all the editing capabilities of Photoshop, but as you get to know it and use it more often, it seems to strike the right balance between the functions and in a way that’s mostly logical to the photographer.

In terms of photo management, Lightroom isn’t as good as my long-time favorite tool, ACDSee Pro.  ACDSee is fast and its storage paradigm parallels that of a standard directory tree, making it a natural and easy-to-understand extension of how the files are physically stored.  I was never in love with ACDSee’s metadata handling, though, and even though it has editing tools built in, I really had to export photos to Photoshop to get what I wanted done.  The breadth of photo metadata is much easier to manage in Lightroom, but the way photos are managed isn’t as logical.

What’s hard to get used to about Lightroom is that it maintains all its data about a photo in a separate database.  That is, not in the photo file itself.  So, while the database and photos aren’t physically connected, they are both necessary to reconstruct any changes made to the photo.  This is not only true for the photo’s metadata, but also for edits to it.  If you brush over some skin to remove your kids’ zits, those changes are in the database and not in the file.  If you change the IPTC data to give the photo a caption, that’s in the database and not in the photo file.  This separation takes a while to get used to, especially if you use other tools in your workflow.  If you grab the JPEG you just edited in Lightroom in a new tool, you’ll get the original file and not the modified one.

For sure, you can write metadata changes in the Lightroom database out to the file.  You just have to remember to do that – it’s not automatic.  Writing the editing changes is another thing altogether.  This requires “exporting” the file from Lightroom which, at its best, is a bit confusing.  In the end, like most things Adobe, you have to choose to make Lightroom the cornerstone of your workflow and adopt the way it wants to do things.

That sounds bad, except it does so much so well.  In fact, there aren’t many reasons why Lightroom can’t be the only tool used by the photographer.  Not only can metadata be managed as discussed above, but the editing functions are broad and deep.  For those that know about Adobe’s RAW photo handling, Lightroom’s Develop module does all that Adobe RAW does and more.  In my experience, I can make almost all the adjustments to photos that I want without ever leaving Lightroom.  For the few things that Lightroom doesn’t do, I can easily export to Photoshop to get them done.  For you Photoshop geeks, Lightroom doesn’t have layers, but you can mask areas of the photo.  There are also no tools for HDR, stitching panoramas and the like.  For those, you have to export to the mother ship, Photoshop, which is pretty easy because . . . Adobe wants you to do it.

So, Lightroom is a dessert topping and a floor wax.  It’s a photo asset management tool and editor.  In my experience, it’s also the best all-around digital photo tool available.  As usual (with Adobe products), the learning curve is relatively steep, although nothing like for tools like Photoshop.  Once you know how to use it (there is loads of help from users on the web), you can modify your photos amazingly fast, create some really cool effects and, ultimately morph the picture in your camera into the one that was in your head to start with.  Isn’t that how it’s supposed to work?

 December 21st, 2009  
 Photography, Software  

Livin’ in the Cloud

When it comes to my data, I’m a suspenders and belt kinda’ guy.  It can’t be in too many places or have too many layers of security.  As with investing one’s hard-earned cash, diversification is critical to success.  As such, I have loads of internal backup and security methods that are part of my routine.  I ghost a copy of my primary drive in my desktop to an auxiliary drive inside the same machine; I have a Windows Home Server in my house which does a differential backup of my files every few days; and I even sync critical files with a USB memory stick that I can take with me if I need/want to.  OK, maybe that’s a couple of sets of suspenders and a belt or two.  What can I say?

I’ve been thinking about also syncing and backing up some data to the cloud over the last six months and took the plunge a couple of months ago.  I’ve thought about what I really want out of cloud storage and have tried several offerings.  I’ll talk about these specifically, but first, a little background on my thinking and what I was looking for.

It seems to me that when it comes to the storage of data in the cloud, as opposed to the actual use of it, there are three general types of storage solutions – raw file storage, synced/backup file storage, and content-specific storage.  Raw cloud-based file storage is simply disk space somewhere on the internet that you can do whatever you want with (think Amazon S3).  Synced storage is similar, but it’s usually set up specifically to facilitate the synchronization or backup of data between a PC and disk space similarly elsewhere on the net.  Content-specific storage is specifically set up for particular data types like email, photos, music, etc.

When cloud storage is segmented this way, one quickly realizes that all email users have been cloud storage consumers for a while.  Whether you use a basic POP or IMAP server for your email or something heavier duty like Exchange or Notes, your email has been in the cloud at least for some period of time.  So, you, like me, are already likely a user of cloud storage.  This rationalization helped me feel more comfortable about moving my data to someplace unknown.

In the end, I found I was most interested in having storage for backups and syncing to keep multiple computers up to date.  Services for the latter often assume the former – a cloud-based synced storage provider often has nice backup capabilities as well.  After all, backup is the same storage mechanism without the sync function.  I also wanted to expand my specialized storage to include my large photo collection.  For this, I wanted a photo-specific site that offered galleries and photo management.  These, of course, are not offered by the raw or synced backup folks.

While I hardly tried all the services available, I did try a few, including Amazon S3, Microsoft’s SkyDrive, Microsoft’s Live Mesh, Syncplicity, KeepVault, SmugMug and Flickr.  Here are my thoughts:

  • Amazon S3 – S3 is simply raw storage and it lies underneath many of the other, higher-level cloud storage services out there.  There’s no high-level interface per se and, as it states clearly on the Amazon AWS site, it’s “intentionally built with a minimal feature set.”  At $0.15/GB/month it isn’t even that cheap compared to some other services – 200GB of backup costs $30 a month, or $360 a year.  Oh yeah, I can do basic math . . .
  • SkyDrive – It’s “integrated” with Microsoft’s unbelievably confusing array of Windows Live services.  I consider myself pretty knowledgeable about Microsoft stuff, but this Windows Live thing is hard to understand.  It works nicely, but there isn’t any client on the PC side, really.  Uploading files is done a handful at a time and there is no syncing.  It’s really about sharing files and doesn’t offer any automated backup or syncing.  Even for bulk storage, it’s too difficult to use.  They offer 25GB of storage for free. 
  • Live Mesh – I like Live Mesh a lot.  Live Mesh is all about synchronization between multiple machines, including Macs (beta) and mobile phones (“soon”) as well as online through a web browser.  It works totally behind the scenes, is fast and has the best reporting about what it did and what it’s doing of any service I tried.  It also offers features like accessing the desktop of a Live Mesh-connected computer and a nice chatting and feedback facility for sharing and commenting on shared documents.  My only problem with Live Mesh was the level of file granularity for syncing.  Live Mesh only understands directories, not individual files.  Sometimes, you just don’t want the entire directory synced.  The initial 5GB of storage is free.  It’s still in beta.
  • Syncplicity – It’s my favorite of all the sync/backup solutions so far.  It makes assumptions about the directories you want to sync or backup and adding different ones is a tad confusing, but once you get it, it’s all a piece of cake.  The reporting on what it’s doing isn’t as nice as Live Mesh, but it’s just as seamless and it’s pretty fast (like Live Mesh).  Unlike Live Mesh, individual files can be added or removed from a sync tree by right-clicking them (Windows) and just specifying whether or not the file should be included.  Also, it’s easy to specify whether you want files to be synced with other machines or just backed up.  I’m still not completely content with how Syncplicity deals with conflicts.  No data is ever lost, but it can be duplicated, leaving copies scattered in your directories.  Also, I had one really nasty problem with the service.  The Syncplicity client was sucking up 10%-50% of the CPU time on my machine – all the time.  I sent messages to Syncplicity support and complained about the problem on their forum.  Nothing, zero, no response for weeks.  In fact, to this day, I’ve gotten no response.  I eventually figured the problem out myself.  A TrueCrypt encrypted volume in a directory on my machine was screwing the client up.  Once removed from the sync tree, the problem was gone.  Just horrible service.  There is a free 2GB trial and then $99/year for the first 100GB.  This is a 50% discount offer that’s been running for a while.
  • KeepVault – I tried this out because it integrates nicely with the Windows Home Server Console.  I’m using it specifically to back up my server – no desktops included and no synchronization, just backup.  It seems to work well, but the initial backup of 150GB of data took about 16 days even when I was not throttling the speed of the connection (a nice option for a server, BTW).  Additionally, the backup process stalled about 20 times during the initial backup.  Now that it’s only dealing with a handful of files, albeit big ones, at a time, it seems to be working well.  Jury’s still out.  No trial, but a 30-day money-back guarantee.  $180 for 200GB of backup.
  • SmugMug – I have 42GB of photos on my server which represent the most cherished of all data I have.  At the very least, I needed to backup these files to another physical location.  At best, it would be nice if the data could be organized and viewed from that location as well.  I looked at many sites, including Flickr (the relative standard in this space) and chose SmugMug.  The difference is that SmugMug is aimed at photographers who at least think there is some level of professionalism in their shots.  SmugMug’s pages are totally customizable and they understand not to mess with pictures being uploaded (unless you want them to).  It’s about the gallery first and about sharing second.  Just what I wanted – I’ve never learned how to share well 🙂

There are loads of other services out there including some I considered, but decided not to try on this first pass – DropBox, ZumoDrive, iDrive, Soonr, Jungle Disk, etc.  In general, I’m feeling better about having my data somewhere else.  The process is easy and, as far as I can tell, secure.  Syncing can certainly get better, though, and when there’s a failure, it’s very hard to debug, even if you can detect that it happened in the first place.  Sometimes, as with any backup, you don’t know there was a problem until an emergency happens and you really need to restore a file.  Not painless, but fairly low barriers to experience.  Come on in, the water’s fine . . . so far.

 June 17th, 2009  
 Computers, Photography, Software  

Lost My Feed with Upgrade to WordPress 2.7

[Two posts were ignored by Feedburner once I got connected: WordPress Upgrade, New Theme and Other Stuff and Gadget Review: VuDu Box.  If you’re interested in either, you can use the links above.]

Yikes!  I just discovered that my feed on Feedburner disappeared after my upgrade to WordPress 2.7.  Not only were no blog readers notified of new posts, but the feed itself was blank.  After unsuccessfully searching for a solution in the Google Feedburner Help Group and through various other forums, blogs and blogging therapy clinics, I gave up.  While several people had similar difficulties, none of their fixes worked for me.

To make a long story somewhat shorter, after hacking around a lot with WordPress, I discovered that the link to the blog’s “original” feed on my server was incorrect on the Feedburner side.  Specifically, the link was http://www.2-speed.com/wp-rss2.php when it should be http://www.2-speed.com/feed.  I made the change in “Edit feed details” on my Feedburner page.

I don’t know if this was a change resulting from the upgrade to WordPress 2.7 (although I expect it was) or because of the recent change Google has made to the Feedburner service.  In any event, I hope my pain and discovery can help someone else with the same problem.

 February 16th, 2009  

WordPress Upgrade, New Theme and Other Stuff

For the last few days, I’ve been working on my blog – what you see and what you don’t.  This blog is managed by me, but is actually sourced at a web hosting provider.  For a while now, I’ve wanted to create a mirror of the blog on my local server in my house – a Windows Home Server box.  This took me longer than expected, but I now have the blog running in two places.  Suspenders AND a belt.

While I was at it, I upgraded to WordPress 2.7.1, the latest and greatest blogging platform from the wonderful folks at WordPress.  That would have gone easily, except WordPress now has threaded comments (thanks IntenseDebate), which blew up my theme.  Additionally, my old theme had loads of PHP hacks that I had written.  So, I decided to use a new theme, Atahualpa, which has so much high-level configurability built in that I didn’t have to touch a single line of PHP to get what I wanted.  Very impressive.  It did not, however, make up for my complete lack of visual design skill, as you can see . . .

Somewhere along the way, older posts lost the email addresses of their commenters.  My face is also appearing as the Gravatar of all commenters (yuck).  I’m gonna have to look at this further.  More disturbing is that my URL rewriter doesn’t seem to be doing its job, breaking links to other posts.  That’s gonna require some study as well.

These problems aren’t entirely unexpected and part of what happens when you manage this stuff yourself.  Of course, it’s exactly why I manage this stuff myself.  It’s hard to know how it works if you don’t get your hands dirty.

 February 12th, 2009  

Software Management Guides from an Expert

Longtime friend and cohort, Lorne Cooper, has two new posts up on the AccuRev blog that are must-reads if you’re in the software development business.  Aside from his role as CEO of AccuRev (I am a board member and investor), which develops and sells software for software developers, Lorne has a long history of running software companies and projects.  In these posts, he shares some of the wisdom he has gained over the years.

Check ’em out.

 July 14th, 2008  
 Leadership, Management, Software  

Consumed Writing Software

For a good part of the last couple of months, most of the time I’ve spent in front of a computer has been used to explore the current world of software development.  Developing software is how I started my career, and it’s something I always had a total blast doing – in an obsessive-compulsive, off-the-scale intense sorta way.  My wife always used to tease me that I had two personas – the software development one and the normal one.  Not that it was all that great, but she liked the latter one a lot more.

It’s probably worth mentioning that while I enjoyed it and got to write a lot of code that people bought for real money, I was never an A-class developer.  Eventually, I discovered that managing development teams was more of a natural fit for me and I only looked back longingly once in a while.

Things have changed a lot since I last delved into development.  C, which used to be used for just about everything, has been replaced with newer, object-oriented languages inside rich environments that actually make it easy to incrementally build software projects and target them at multiple operating environments and platforms.  As with most things, though, all of this power has created new levels of complexity.  Successfully building even a moderately complex application seemingly requires at least a passing knowledge of several languages and environments.  Because of this, it’s not the writing of code (or even the debugging) that takes all the time – it’s ramping up on all the various pieces required to create the application.

For example, my recent journey included spending time with Ruby, Rails, C#, Visual Studio, PHP, Eclipse, HTML, Visual Basic, SQL, XML, CSS, etc.  Even with all the documentation and help available on the web, each of these technologies takes some time to understand well enough to actually use.

After investigating the list above and a few others, I decided to start writing a web application using Visual Studio, C# and of course, HTML and CSS.  My goal was to be able to put a home weather station on the Net.  This was a crappy choice for a first project since I also had to debug serial and TCP communication.  No guts no glory.
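The TCP half of that plumbing is simple in outline, even though debugging it wasn’t: a small server accepts a connection and ships the latest reading to whoever asks.  A rough sketch in Python (my project was in C#, and the reading format here is invented for illustration):

```python
import socket
import threading

def serve_reading(reading, host="127.0.0.1", port=0):
    """Serve a single weather reading to the next client that connects.
    Returns the bound (host, port) so a client knows where to find it."""
    server = socket.create_server((host, port))  # port=0: let the OS pick
    addr = server.getsockname()

    def handle():
        conn, _ = server.accept()
        with conn:
            conn.sendall(reading.encode("ascii"))
        server.close()

    threading.Thread(target=handle, daemon=True).start()
    return addr

def fetch_reading(addr):
    """Connect to the server and read one complete reading (until EOF)."""
    with socket.create_connection(addr) as conn:
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii")
```

The real work, of course, was on the serial side – coaxing the station’s readings out of the hardware in the first place – which is exactly the part a sketch like this glosses over.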

It was a fun ride.  Eventually, I had only a couple of hundred lines of code that implemented the project, although I probably wrote several thousand trying to figure things out.  Since I couldn’t find anything like it mentioned on the web, I published it here.  If you’re interested, there is a complete description of the project as well as the code at the link.

Even though it took an unreal amount of time, I had a complete blast.  My wife frequently said things during the project like, “stay away from your father, he’s programming,” worrying for the safety of her children.  Or, “uh, oh, he’s coding again – we’ve lost him.” 

I’m going to try to continue to do development at some level so I don’t have the same steep learning curve to climb again.  But for now, maybe, I can use my computer for some blogging as well.

 September 15th, 2007