Entrepreneurial Leadership and Management . . . and Other Stuff


Adobe Lightroom: Floor Wax or Dessert Topping?

Some of you (older readers of this blog) might remember the old Saturday Night Live skit that aired in 1976 with Dan Aykroyd, Gilda Radner and Chevy Chase in which they argue whether a canned substance by the name of New Shimmer is a floor wax or a dessert topping.  As you’d expect from an SNL commercial parody, it’s both.

After using Adobe Lightroom for about a year now, I remain similarly confused about what, exactly, it is.  Is it a photo editor or is it an asset management tool?  Well, it’s both; and neither.  Whatever it is, once you know how to use it, it’s a great tool for doing . . . stuff with your pictures.  Let me explain.

In a photo processing workflow, there is generally a tool for managing photos and how they are stored.  That is, where they are located (locally or remotely) and how they are organized in directory structures and with their metadata.  And, there is a tool for processing or editing photos – actually adjusting the way the photo looks.  Sometimes, these are the same tools.  Most often, however, best of breed tools win out and photographers use separate tools for each function.

Adobe develops the 800-pound gorilla in the photo editing business – Photoshop.  It’s the absolute leader in the segment.  It’s robust, has a million features and loads of add-ons available from third parties.  It also has a huge learning curve and an arcane user interface.  It’s meant to address the needs of a broad range of people, not just photographers.  As such, photographers have to sift through a boatload of functions that they will likely never use and learn techniques to make changes to photographs that aren’t always aligned with how photographers think.

On the asset management side, even Adobe doesn’t try to pack organizational functions into its behemoth.  Adobe has another product called Bridge to fill that role.  IMO, Bridge doesn’t cut it for many reasons.  It appears to be primarily designed to front-end Photoshop and, as such, is stuck with some of the same non-photographic concepts that weigh down its big brother.  It’s also slow.  To me, speed is an absolutely critical factor in asset management.  Especially considering that gigabytes worth of photo data are often somewhere on the network, not stored on a local disk drive.  The asset management tool needs to be ultra-fast to overcome network latency.

To address these issues, Adobe came up with Lightroom (actually, it came from Macromedia as part of Adobe’s acquisition of the company).  Lightroom is for photographers – it has a photographic workflow, including asset management tools and photo editing functions.  It doesn’t have all the asset management tools of some dedicated asset management products and certainly doesn’t have all the editing capabilities of Photoshop, but as you get to know it and use it more often, it seems to strike the right balance between the two functions, in a way that’s mostly logical to the photographer.

In terms of photo management, Lightroom isn’t as good as my long-time favorite tool, ACDSee Pro.  ACDSee is fast, and its storage paradigm parallels that of a standard directory tree, making it a natural and easy-to-understand extension of how the files are physically stored.  I was never in love with ACDSee’s metadata handling, though, and even though it has editing tools built in, I really had to export photos to Photoshop to get what I wanted done.  The breadth of photo metadata is much easier to manage in Lightroom, but the way photos are managed isn’t as logical.

What’s hard to get used to about Lightroom is that it maintains all its data about a photo in a separate database.  That is, not in the photo file itself.  So, while the database and photos aren’t physically connected, they are both necessary to reconstruct any changes made to the photo.  This is not only true for the photo’s metadata, but also for edits to it.  If you brush over some skin to remove your kids’ zits, those changes are in the database and not in the file.  If you change the IPTC data to give the photo a caption, that’s in the database and not in the photo file.  This separation takes a while to get used to, especially if you use other tools in your workflow.  If you grab the JPEG you just edited in Lightroom with a new tool, you’ll get the original file and not the modified one.

For sure, you can write metadata changes in the Lightroom database out to the file.  You just have to remember to do that; it’s not automatic.  Writing the editing changes is another thing altogether.  This requires “exporting” the file from Lightroom which, at its best, is a bit confusing.  In the end, like most things Adobe, you have to choose to make Lightroom the cornerstone of your workflow and adopt the way it wants to do things.

That sounds bad, except it does so much so well.  In fact, there aren’t many reasons why Lightroom can’t be the only tool used by the photographer.  Not only can metadata be managed as discussed above, but the editing functions are broad and deep.  For those who know about Adobe’s RAW photo handling, Lightroom’s Develop module does all that Adobe Camera Raw does and more.  In my experience, I can make almost all the adjustments to photos that I want without ever leaving Lightroom.  For the few things that Lightroom doesn’t do, I can easily export to Photoshop to get them done.  For you Photoshop geeks, Lightroom doesn’t have layers, but you can mask areas of the photo.  There are also no tools for HDR, stitching panoramas and the like.  For those, you have to export to the mother ship, Photoshop, which is pretty easy because . . . Adobe wants you to do it.

So, Lightroom is a dessert topping and a floor wax.  It’s a photo asset management tool and editor.  In my experience, it’s also the best all-around digital photo tool available.  As usual (with Adobe products), the learning curve is relatively steep, although nothing like for tools like Photoshop.  Once you know how to use it (there is loads of help from users on the web), you can modify your photos amazingly fast, create some really cool effects and, ultimately, morph the picture in your camera into the one that was in your head to start with.  Isn’t that how it’s supposed to work?

 December 21st, 2009  
 Photography, Software  

Kodachrome is Dead. What Can You Learn From Its Death?

Way back before digital photography, when dinosaurs roamed the surface of the planet, families packed themselves into smoky living rooms to watch trays full of color slides projected onto uneven plastered walls.  These photo viewing sessions, along with some of the most outstanding print photography in history, were brought to you by Kodachrome, Kodak’s long-lived transparency film.  Kodachrome, created by Eastman Kodak in 1935 (specifically, by scientists Leopold Godowsky and Leopold Mannes, known as "God and Man" inside Kodak), has been around longer than any photo product in history.  Today, Kodak announced that it is “retiring” Kodachrome.

When I was young and dipping my toe into the very deep waters of photography, I primarily used black and white print film, which was my stock in trade because I could process and print it myself.  When I wanted color, I used Kodachrome and its younger sibling, Ektachrome.  The colors in Kodachrome were great – much better than color prints at the time.  In retrospect, slides held up way better than prints as well and are much easier to scan into their digital versions.

The passing of such a product is a reminder about how things have changed and how companies need to be dynamic and change with their markets – hopefully leading them.  Kodak has made the switch to digital (they still, of course, produce plenty of film), but in the change, lost the market leadership that they once had.  While I’m sad to see such a landmark product die out, it makes me excited to think about all the potential replacement products in all markets as inevitable change happens.

Nintendo lost its utter dominance of the home electronic game market when Sony, then Microsoft beat them at their own game, using new technology.  Nintendo then reemerged from its failure with another breakthrough product, the Wii.  What’s going to happen now to wristwatch sales as virtually everyone under 25 uses their phone to get the time?  How about compact camera sales?  As cell phone cameras keep improving and software processing on phones gets better, who will want to carry both a compact camera and a phone?  You get the idea.

What about your market?  What fundamental and underlying changes are happening that you can take advantage of?  Don’t think only technologically; societal changes are even bigger driving forces.  Whatever they are, get there first and you’ll have a substantial advantage.

 June 23rd, 2009  
 General Business, Photography  

Livin’ in the Cloud

When it comes to my data, I’m a suspenders and belt kinda’ guy.  It can’t be in too many places or have too many layers of security.  As with investing one’s hard-earned cash, diversification is critical to success.  As such, I have loads of internal backup and security methods that are part of my routine.  I ghost a copy of my primary drive in my desktop to an auxiliary drive inside the same machine; I have a Windows Home Server in my house which does a differential backup of my files every few days; and I even sync critical files with a USB memory stick that I can take with me if I need/want to.  OK, maybe that’s a couple of sets of suspenders and a belt or two.  What can I say?

I’ve been thinking about also syncing and backing up some data to the cloud over the last six months and took the plunge a couple of months ago.  I’ve thought about what I really want out of cloud storage and have tried several offerings.  I’ll talk about these specifically, but first, a little background on my thinking and what I was looking for.

It seems to me that when it comes to the storage of data in the cloud, as opposed to the actual use of it, there are three general types of storage solutions – raw file storage, synced/backup file storage, and content-specific storage.  Raw cloud-based file storage is simply disk space somewhere on the internet that you can do whatever you want with (think Amazon S3).  Synced storage is similar, but it’s usually set up specifically to facilitate the synchronization or backup of data between a PC and disk space elsewhere on the net.  Content-specific storage is specifically set up for particular data types like email, photos, music, etc.

When cloud storage is segmented this way, one quickly realizes that all email users have been cloud storage consumers for a while.  Whether you use a basic POP or IMAP server for your email or something heavier duty like Exchange or Notes, your email has been in the cloud at least for some period of time.  So, you, like me, are already likely a user of cloud storage.  This rationalization helped me feel more comfortable about moving my data to someplace unknown.

In the end, I found I was most interested in having storage for backups and syncing to keep multiple computers up to date.  Services for the latter often assume the former – a cloud-based synced storage provider often has nice backup capabilities as well.  After all, backup is the same storage mechanism without the sync function.  I also wanted to expand my specialized storage to include my large photo collection.  For this, I wanted a photo-specific site that offered galleries and photo management.  These, of course, are not offered by the raw or synced backup folks.
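The "backup is sync without the sync function" idea can be sketched in a few lines of Python.  This is purely a toy illustration of the one-way case (the function and policy are mine, not how any of these services actually work):

```python
# A one-way "backup" in the sense above: mirror files to a destination
# when they're missing or newer there. A two-way sync would also
# propagate changes back. Toy illustration only, not a vendor algorithm.
import os
import shutil

def backup(src, dst):
    """Copy files from src to dst when missing or newer; return what was copied."""
    copied = []
    for root, _dirs, files in os.walk(src):
        target_dir = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy only if the destination is absent or older than the source
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied.append(os.path.relpath(s, src))
    return copied
```

The real services layer versioning, conflict handling and encryption on top of a loop like this, which is exactly where they differ from each other.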

While I hardly tried all the services available, I did try a few, including Amazon S3, Microsoft’s SkyDrive, Microsoft’s Live Mesh, Syncplicity, KeepVault, SmugMug and Flickr.  Here are my thoughts:

  • Amazon S3 – S3 is simply raw storage and it lies underneath many of the other, higher-level cloud storage services out there.  There’s no high-level interface per se and, as it states clearly on the Amazon AWS site, it’s “intentionally built with a minimal feature set.”  At $0.15/GB/Month it isn’t even that cheap compared to some other services – 200GB of backup costs $360 a year.  Oh yeah, I can do basic math . . .
  • SkyDrive – It’s “integrated” with Microsoft’s unbelievably confusing array of Windows Live services.  I consider myself pretty knowledgeable about Microsoft stuff, but this Windows Live thing is hard to understand.  It works nicely, but there isn’t really any client on the PC side.  Uploading files is done a handful at a time and there is no syncing.  It’s really about sharing files and doesn’t offer any automated backup or syncing.  Even for bulk storage, it’s too difficult to use.  They offer 25GB of storage for free.
  • Live Mesh – I like Live Mesh a lot.  Live Mesh is all about synchronization between multiple machines, including Macs (beta) and mobile phones (“soon”) as well as online through a web browser.  It works totally behind the scenes, is fast and has the best reporting of any service I tried about what it did and what it’s doing.  It also offers features like accessing the desktop of a Live Mesh-connected computer and a nice chatting and feedback facility for sharing and commenting on shared documents.  My only problem with Live Mesh was the level of file granularity for syncing.  Live Mesh only understands directories, not individual files.  Sometimes, you just don’t want the entire directory synced.  The initial 5GB of storage is free.  It’s still in beta.
  • Syncplicity – It’s my favorite of all the sync/backup solutions so far.  It makes assumptions about the directories you want to sync or backup and adding different ones is a tad confusing, but once you get it, it’s all a piece of cake.  The reporting on what it’s doing isn’t as nice as Live Mesh, but it’s just as seamless and it’s pretty fast (like Live Mesh).  Unlike Live Mesh, individual files can be added or removed from a sync tree by right-clicking them (Windows) and specifying whether or not the file should be included.  Also, it’s easy to specify whether you want files to be synced with other machines or just backed up.  I’m still not completely content with how Syncplicity deals with conflicts.  No data is ever lost, but it can be duplicated, leaving copies scattered in your directories.  Also, I had one really nasty problem with the service.  The Syncplicity client was sucking up 10%-50% of the CPU time on my machine – all the time.  I sent messages to Syncplicity support and complained about the problem on their forum.  Nothing, zero, no response for weeks.  In fact, to this day, I’ve gotten no response.  I eventually figured the problem out myself.  A TrueCrypt encrypted volume in a directory on my machine was screwing the client up.  Once removed from the sync tree, the problem was gone.  Just horrible service.  There is a free 2GB trial and then $99/year for the first 100GB.  This is a 50% discount offer that’s been running for a while.
  • KeepVault – I tried this out because it integrates nicely with the Windows Home Server Console.  I’m using it specifically to back up my server – no desktops included and no synchronization, just backup.  It seems to work well, but the initial backup of 150GB of data took about 16 days even when I was not throttling the speed of the connection (a nice option for a server, BTW).  Additionally, the backup process stalled about 20 times during the initial backup.  Now that it’s only dealing with a handful of files, albeit big ones, at a time, it seems to be working well.  Jury’s still out.  No trial, but a 30-day money-back guarantee.  $180 for 200GB of backup.
  • SmugMug – I have 42GB of photos on my server, which represent the most cherished of all the data I have.  At the very least, I needed to back up these files to another physical location.  At best, it would be nice if the data could be organized and viewed from that location as well.  I looked at many sites, including Flickr (the relative standard in this space) and chose SmugMug.  The difference is that SmugMug is aimed at photographers who at least think there is some level of professionalism in their shots.  SmugMug’s pages are totally customizable and they understand not to mess with pictures being uploaded (unless you want them to).  It’s about the gallery first and about sharing second.  Just what I wanted – I’ve never learned how to share well 🙂
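For the raw-storage pricing mentioned above, the annual math is worth writing down once.  A quick sketch (the function name is mine; a real S3 bill also adds transfer and request fees):

```python
# Rough annual cost of raw cloud storage billed per GB per month.
# Storage only; real bills (e.g., S3's) add transfer and request fees.
def annual_storage_cost(gb, rate_per_gb_month=0.15):
    """Annual cost in dollars for storing `gb` gigabytes."""
    return round(gb * rate_per_gb_month * 12, 2)

print(annual_storage_cost(200))  # 200GB at $0.15/GB/month -> 360.0
```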

There are loads of other services out there including some I considered, but decided not to try on this first pass – DropBox, ZumoDrive, iDrive, Soonr, Jungle Disk, etc.  In general, I’m feeling better about having my data somewhere else.  The process is easy and, as far as I can tell, secure.  Syncing can certainly get better, though, and when there’s a failure, it’s very hard to debug, even if you can detect that it happened in the first place.  Sometimes, as with any backup, you don’t know there was a problem until an emergency happens and you really need to restore a file.  Not painless, but the barriers to entry are fairly low.  Come on in, the water’s fine . . . so far.

 June 17th, 2009  
 Computers, Photography, Software  

Microsoft Live Labs Seadragon

In an obvious move to get more of the technologies being worked on inside Microsoft into our grubby little hands as fast as possible (think Google’s success with its perpetual-beta solutions), Microsoft has released a couple of new tools/technologies for us to try.

One of these is StickySorter and another is Seadragon.  StickySorter is used for affinity diagramming and Seadragon is used for infinite web-based zooming.  Check it out below.

It’s incredibly easy to use and you can download a program to build what you need or do it online.  Keep up the good work, lab guys.

 November 18th, 2008  

Checkin’ Out Photosynth

When I saw the coming out demo of Microsoft’s Photosynth technology done at the TED Conference over a year ago, I was totally blown away.

Photosynth automatically assembles a set of individual photos of a particular subject into a three dimensional, explorable universe of the scene. The more photos, the more detail and the more explorable the final “synth.”  It differs from stitching – the process of aligning and joining several overlapping photos to create a single larger image – in that the resulting image is a space rather than a flat 2D image.

When the public beta was introduced a couple of months ago, I was all over it.  I played with other people’s synths and was impressed.  But, of course, I had to give it a go myself.  I decided to throw what I thought would be a difficult scene at it – one with trees.  Trees always give stitching programs fits and, as it turns out, they do the same for Photosynth.  There are just a whole lot of edges to align.

I took 144 photos of a location (you don’t need to take that many, but I wanted to see how complete a scene I could create) from every angle I could get to.  Photosynth cranked on the photos for a while and broke the scene into many different views.  There should have only been one, but the program couldn’t match up the views to form a single synth.  The results are below.

Photosynth reported that my 144 photos were only 23% “synthy.”  Basically, Photosynth could only make heads or tails of 23% of my photos in creating the final synth.  If you look at the synths on the web site, you’ll find excellent ones that are >90% synthy like The Boxer.

The user interface for creating synths is very simple and the program creates synths with virtually no user intervention.  Exploring synths is a different matter.  The browser interface is a bit strange to me.  I’m never sure what the arrows and buttons are supposed to do, even after trying them.  I may be using it to its fullest, or I may be missing the point entirely.  A few more tooltips might be helpful.

You need to download the Synther, which runs on your PC (no Mac support yet).  The Synther will upload the synth to servers in the cloud.  You’ll need a Microsoft Live ID to use the service.  For now, all uploaded synths are public.  Everything is free.

I think this technology has tremendous promise and I plan on playing with it a lot more.  Of course, I’ll report back on my findings.  In the meantime, you may want to give it a try.  It’s easy and very cool.

 October 20th, 2008  

The Merging of Video and Still Imagery

The latest generation of prosumer DSLR (Digital Single-Lens Reflex) cameras are not only about taking great still photos, but also about taking great video.  While video has been available in point-and-shoot cameras for some time, it’s been ignored in higher-end cameras – for some market reasons and for some functional ones.

Any excuses that existed before, however, have been punted and the onslaught of DSLRs merging still and motion imagery has begun.  Recently, Nikon introduced the first DSLR to shoot video, the Nikon D90.  And, in the usual tit-for-tat battle for image supremacy between Nikon and Canon, the latter has fired back with a huge salvo – the Canon 5D Mark II.

The 5D Mark II sports a ridiculously large and dense 21MP sensor, stealing souls at 5616 x 3744 pixels and shooting full HD video.  I was skeptical about the video part.  First, the form factor of an SLR doesn’t really lend itself to hand-holding for video, and second, multiple-purpose devices often fail at all of their intended purposes.  That is, until I saw the video shot by Pulitzer Prize-winning photographer Vincent Laforet.  Check it out here: Reverie – Behind the Scenes.


Only some scenes were shot with the 5D, but I challenge you to figure out which ones they were.  According to Laforet, the video quality of the 5D is much better than that of Canon’s XH-A1 dedicated video camera, especially in low light.

 September 23rd, 2008  
 Gadgets, Photography  

Gadget Review: Canon G9

In the world of photography, I am a Canon guy.  It’s not only that I like Canon photographic products, but I have a big investment in Canon lenses which makes it difficult (read: expensive) to change to cameras from other manufacturers.  My current photographic weapon of choice is Canon’s 5D D-SLR (Digital Single Lens Reflex) camera (reviewed on this blog here).  Additionally, like any self-respecting photographic junkie, I have a wide range of lenses and other stuff from Canon that acts as a crutch, bolstering my otherwise mediocre photographic skills.  All in, my camera and associated equipment weighs about 20 pounds and, in its most portable configuration, fills a reasonable size backpack.

Most often, this isn’t a problem and the chance to get a truly great shot outweighs (pun intended) the inconvenience of carrying the heavy load.  Sometimes, though, an alternative is needed.  Like when on an active vacation or in confined spaces that aren’t ideal for long lenses and really bright flashes.  This is where one of the huge number of compact cameras available comes in.

For the most part, compact cameras are virtually all fully automatic – point-and-shoot, as it were.  The user need only turn the camera on, aim at a desired target and push a button to steal their soul.  They are the modern equivalent of the original Kodak Brownie: everyone can use one.

Recently, I decided to replace an old compact that I had used for many years with something equally as portable, but with more power and manual control.  My requirements were:

  • Reasonable Sensor Resolution – 8MP should suffice (more on this later)
  • As Large a Physical Sensor as Possible – Low pixel density and larger pixels = clearer pictures and less noise.
  • Optical IS (Image Stabilizer) – Image stabilization helps to capture clear pictures where a shaky hand or low light might have otherwise prevented them.
  • Optical Viewfinder – I cut my teeth on SLRs; I like to see the image through glass instead of via an electronic screen – old habit.
  • Aperture/Shutter Priority + Full Auto – I wanted the camera to have a fully automatic mode, but I also want to be able to shoot pictures by fixing either the shutter speed or the aperture myself.
  • Easily Settable ISO Speed – In digital camera terms, the ISO speed setting adjusts the sensitivity of the sensor in the camera – the more sensitive, the better the pictures in low light, with trade-offs, of course.  Most point-n-shoot cameras set it automatically; I want to be able to do it manually.
  • Built-in Flash – Used for fill flash mostly – to light the objects close to the lens so they are not in shadow.
  • Good Battery Life – Nuclear power would be nice.  I just have to find the plutonium section at my neighborhood camera store.
  • Completely Retractable Lens – The lens has to curl up inside the camera.  It makes the camera smaller to carry and protects the lens.
  • Small as Possible Package – I’d like to carry it in my pocket.
  • Reasonable Wide Angle and Long Zoom – I want to get lots of stuff in my photo when I’m close up and be able to get good shots from far away.  My goal is below 30mm wide and over 200mm tele.
  • RAW File Support – I’ll shoot in JPEG almost always, but when I find that really special shot, I want to be able to capture everything with no in-camera processing.
  • Good Macro Mode – I like taking pictures of flowers and creepy, crawly bugs up close.  Having a macro mode that lets me focus within a few inches of the lens would be great.

Whew!  I also wanted a slew of other features like exposure bracketing, fast startup time, adjustable metering mode, etc, but they were less important to me.  Yeah, I wanted a lot, but I figured it was all doable.  I was wrong.

Because of what is a small market for this set of features combined with the fact that they amount to an enormous boatload of technology, there aren’t many cameras that meet these criteria.  In fact, there are none.  The ones that came close (at the time of my purchase several months ago) were:

  • Panasonic LX2
  • Leica DLux3
  • Ricoh GX100
  • Nikon P5100
  • Canon A650
  • Sony DSC-H10
  • Canon G9

Even though I am a self-proclaimed Canon guy, I had no bias towards any manufacturer.  Especially since none of my existing equipment was going to work with any compact camera anyway.  In the end, though, I thought that Canon’s G9 came closest to my requirements.  Hardly fitting in my pocket, it does fit on my belt (in a geeky, pocket protector sorta way).  I’ve now used the camera for a couple of months  and I’m convinced that a good photographer could make this camera jump through hoops.  It’s very powerful and takes some really good pictures.  That said, it’s not without some issues.

  • For as large a sensor that this camera has (see stats below), there is a surprising amount of noise above ISO 400.  I have to believe that it’s related to the resolution of the sensor.  I guess resolution is what sells, because if this same sensor was made with 8MP instead of 12.1MP, I’d bet it’d be great up to at least ISO 800.
  • The small flash on the camera is often too hot or not powerful enough.  There is built-in metering for the flash intensity, but it doesn’t always do a very good job.
  • The lens’ widest view is 32mm (35mm equivalent).  This isn’t bad, of course, but 25-28mm would be a lot nicer.
  • At telephoto, the lens is a bit slow (F4.8).  Combined with the ISO noise problem, above, it’s almost useless in low light.
  • The viewfinder is useless, but there is so much information on the 3″ display, which performs well in sunlight, that it’s less of a problem than I had anticipated.

Other than these issues, and a few nits here and there, this camera met all my prescribed needs.

I just carried this camera on my belt on a trip to Europe that had me walking through cities, museums and cathedrals from dawn till dusk.  Carrying an SLR with a few lenses and a flash would have been a tremendous pain in the ass.  The G9 performed stellarly in daylight and well in low light conditions giving me almost all the results I expected in almost 1,000 photos.

While there are certainly some trade-offs in using this camera, it is very good overall and the only camera that comes close to being a truly portable D-SLR alternative in my opinion.

Here are the key specs . . .

Sensor Size: 1/1.7″ CCD
Resolution: 12.1M Effective Pixels
Lens – Zoom: 32-210mm (35mm Equiv) 6X
Lens – Speed: F2.8-F4.8
ISO: 80-1600
Display: 3.0″ TFT (100% Coverage) + Viewfinder
Size: 106.4 X 71.9 X 42.5mm (4.2 X 2.8 X 1.7in)
Weight (w/o battery): 320g (11.3oz)
Macro: 1-50cm

 July 28th, 2008  
 Gadgets, Photography  

Buying a Network/IP Camera

Network cameras are unlike their webcam siblings in that they are self-contained, IP-addressable units that can operate without an attached PC.  Generally speaking, they have a built-in web server and an FTP client.  They often support telnet and DDNS (Dynamic DNS) as well.  Most also have a small amount of memory.  They are, basically, small computers with a wired or wireless Ethernet connection and a camera.  Oh yeah, as you might expect, they are also a lot more expensive than web cameras.

While I am certainly no expert, I have installed 5 network cameras for personal use to date and have learned enough to know that I’d do it differently if I were to start all over again.  Hopefully, my experience may help others get to their ideal solution faster.

The feature sets of these cameras are difficult to compare on an apples-to-apples basis.  There are cameras made for homes and small businesses and there are cameras made for industrial use and video surveillance.  Knowing what’s important to you before diving in will save you a lot of time and, probably, money.  For example, if you just want to see if a moving van is parked in front of your house being loaded with all your worldly possessions while you’re away, you probably don’t need a camera with a large sensor, high resolution and a lightning fast frame rate.  If, however, your camera is looking out at Old Faithful in Yellowstone, you probably want the best video at the highest resolution possible so that you get the most breathtaking pictures possible.  You get the idea.

With that, here are the basics . . .

At the remote site, where the camera is located, you’re going to need:

  • Power – we’re talkin’ 120V 60Hz type power (in the US, at least) – these cameras can’t run off the measly power from a USB port
  • A computer or a good router that lets you play with port mapping
  • A weatherproof enclosure for the camera if it’s going to be exposed to the elements and it doesn’t come with one
  • A wireless access point (can be your router, of course) if you’re going wireless
  • A backup power source (UPS, etc – completely optional, but some of these cameras don’t handle hard reboots very well)

Also, at some central location (which, because of the wonders of the Internet, doesn’t even have to be on the same continent as your camera), you’ll need a computer or web server where you’ll view the output of the camera or consolidate its images and/or video feeds.

If any of these things seems foreign to you, now would be a good time to hire someone to set your system up, reassess your desire to take on this project, or allocate a lot more time to the project than you ever expected (see The Bower Factor).

Now, how do you choose which camera to use?  Here are the basic questions about functionality you need to ask yourself, IMO . . .

  • Technical details – sensor size, frame rate, resolution – the best cameras have larger sensors (1/2″) with high resolutions (3MP) and really fast frame rates (250 frames/sec).  If your pictures/video are going to be evidence at a trial or your video is going to be used by National Geographic, then you want to maximize all of these.  Smaller sensors (1/4″) shooting sub-one megapixel images at one frame/sec are often fine for what you need and cost a lot less.
  • Do you need audio with your video?  Many cameras have either a built-in microphone or a connector for an external one.
  • Is the camera for outdoor use?  If so, get a camera made for it, not one that needs to be put in a case.  The outdoor enclosure will add size, complexity, weight, etc.
  • How are you going to power it?  If it’s hard to get to or far from power, you need it to use POE (Power Over Ethernet – where power is run through your CAT5/6 cable).  Most cameras do not have this feature.  Some cameras have custom cables that bundle separate power and communication cables.  Not as elegant, and more expensive, but they work just as well.
  • What do you want out of the camera, stills or video?  This is a tough one.  Most cameras won’t let you query them for a still image without using their own software (you can almost always get a dynamic image from any camera if you use the manufacturer’s application or access the web server inside the camera from a browser).  This prevents you from putting the image up on your own site.  Most cameras do, however, have a trigger for timed FTP uploads of images, so you can have an image sent to your computer/server at specified intervals.  Video handling can be even tougher, and many manufacturers require that you use their custom application.  If putting video in your custom application or web site is what you want, make sure that the camera lets you do it.
  • Do you want motion triggering (a picture or video is taken when motion is detected)?  Not all cameras have this, and it’s often useless outdoors, where there is almost constant motion in the background (think trees, animals, people).  This feature is great for security purposes where you’re actually looking for motion.
  • Do you want pan and zoom?  If you want to be actively involved in what the camera is pointed at and it changes, you want this feature.  Most often, you need to use the camera’s web page or the manufacturer’s custom application to control it.  These features also add to the complexity and price.  But, if you need it, you need it.  Make sure you look up user reviews of the particular pan and zoom camera you’re looking at.  There are loads of cameras that fail with bad motors.
  • Do you need wireless?  Before just choosing to go this way, think about the bandwidth you’ll need to feed live video (yes, it is highly compressed).  How far are you from your access point?  How good is the signal?  You know the deal here.
  • Do you need to see what’s going on without much light?  All cameras struggle with low light.  Some handle it better than others.  If you’re buying cheap, make sure you get a camera that switches over to black and white images as the light gets weak instead of just continuing to struggle with color.  For comparison purposes, check out the lux (light sensitivity) of the camera.  The lower the number, the better it’s supposed to handle low-light conditions.
  • How much do you want to spend?  These cameras can get expensive fast.  You’ll need to balance your desire for the features in this list with the size of your wallet.
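On the wireless question above, a back-of-envelope bandwidth estimate helps before you commit.  The numbers in this sketch (VGA resolution, 10 frames/sec, roughly 20:1 JPEG compression) are illustrative assumptions, not the specs of any particular camera:

```python
def video_bandwidth_mbps(width, height, fps, compression_ratio, bytes_per_pixel=3):
    """Rough bandwidth for a motion-JPEG style stream, in megabits/sec."""
    raw_bytes_per_frame = width * height * bytes_per_pixel
    compressed_bytes_per_frame = raw_bytes_per_frame / compression_ratio
    return compressed_bytes_per_frame * fps * 8 / 1_000_000

# VGA at 10 frames/sec with ~20:1 compression:
print(round(video_bandwidth_mbps(640, 480, 10, 20), 1))  # roughly 3.7 Mbit/s
```

A few megabits per second is nothing on a wired LAN, but on a weak wireless signal shared with everything else in the house, it can be the difference between smooth video and a slideshow.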

In terms of specific manufacturers, I can’t claim any particular expertise.  I’ve tried cameras from a variety of manufacturers and I can’t say that any one has stood out.  In fact, I could list fairly significant problems with each camera I’ve tried.  My biggest issue has been that I only began to understand the features and what I actually needed from the cameras as I began to put them in place and get them working. Thus, this list.  Hopefully, it’ll help you figure out which cameras meet your needs right out of the box instead of having to buy-and-try like I did.

 April 5th, 2008  
 Gadgets, Photography  

Amazon Acquires Dpreview.com

Jeez, I completely missed this one last week.  Apparently, Amazon acquired Dpreview.com on May 14.  The DPreview press release is here.  For the uninitiated, Dpreview is a one-stop shop for in-depth reviews of photographic equipment.  Phil Askey started the site in 1998 as a hobby, but the level of detail in his reviews and his completely thorough explanation of everything he discussed quickly made it the place for photographers (amateurs and professionals) to go to learn about the rapidly changing world of photographic equipment.  Dpreview has about 7 million unique visitors every month.

I guess it makes sense that Amazon seeks to control some of the domain-specific expertise on the web – along with the users such sites attract, of course.  I have no idea how many people click through and buy stuff after reading Dpreview’s reviews, but I know that it certainly is tempting.  It will be interesting to see how this relationship unfolds.  My guess is that if there is any decline in quality or detail of the reviews, readers will just punt and find another site.  If Amazon can add even more value, other than just an easy purchase completion, then the site will likely get even more popular.


 May 21st, 2007  

Me and Chuck Darwin

Galapagos Islands Map Courtesy of Wikipedia 

I spent the better part of the last two weeks of 2006 traveling throughout the Galapagos archipelago with my family.  It’s a trip that my wife and I have wanted to do for years, but only convinced our teenage kids that it could be fun as well as educational late in 2005 – it’s a good trip to plan well in advance.

I had high expectations for the trip, having read extensively about the area and its history – geological as well as biological.  I was not in any way disappointed.  This is simply a unique and fantastic part of the world that the Ecuadorian government is working hard to keep in its most natural, pristine form.  All the naturalists that accompany visitors on the islands are locals who go through a formal, government-controlled educational process.  In our experience, they are highly knowledgeable and truly care for the land that they are custodians of.  Visitors must be accompanied by naturalists when visiting almost all of the islands.  Good thing, too, since there is something new and amazing that captures one’s interest and needs some form of explanation every 15 feet or so.

The most shocking part of the experience, even if you fully expect it, is the complete lack of fear the wildlife shows toward human visitors.  Kneeling in front of a marine iguana and staring it in the face does nothing to perturb the animal; visitors have to routinely step over sea lions that litter the paths; and penguins (yup, I said penguins) are as likely to step on your feet as you are to step on theirs.  On visits to certain islands, visitors have to spend much of their time looking a few inches in front of where they are walking to make sure that they don’t tread on naturally-selected, well-camouflaged beasts hidden in the rocks on their path.

Just fantastic and highly recommended.  You won’t be disappointed.

Being a photography geek (note that I did not say photographer), I took about 35 pounds of photographic equipment.  I shot something like 2300 pictures during our time there and I have yet to do any processing of the pictures.  I did weed the set down to about 1300 digital snaps so far, though.  You can take a look at the non-Photoshopped pix here, if you’re interested.  Most of them need, at the very least, some sharpening, but you can certainly get an idea of what’s going on in the Galapagos from them.

Photographing the Galapagos Islands

If you’re interested in photography, want to read some recommendations on what to bring and how to shoot in the Islands or simply wonder what equipment I could possibly stuff into a 35 pound photography backpack, read on . . .

Here’s the equipment I brought with me:

  • Canon 5D – full-frame, 12MP, digital SLR
  • Canon 70-200 f2.8L image stabilization zoom lens
  • Canon 28-70 f2.8L zoom lens
  • Canon 2x extender (costs me two stops)
  • Canon 70-300 f4-f5.6 image stabilization zoom lens
  • Monopod
  • Miscellaneous batteries/chargers and compact flash memory cards

In the end, this setup was NOT ideal, although it worked.  In 35mm equivalents, you need 70-400mm in focal length to shoot everything you want, in my opinion.  Of course, this is hard to get in a single lens.  You can get a lot closer than I did, though.  I generally got off the boat every day with the 70-200mm mounted on the camera WITH the extender, giving me a 140-400mm zoom range in 35mm equivalents (note: the 5D has a full-frame sensor, so there’s no crop multiplier).  Most of my shots were taken with this configuration, but I often had to do a quick change to either remove the extender (getting me down to 70-200mm) or put on the wide(r) angle lens (getting me as wide as 28mm).  This would have worked fine, except that,

  1. the environment is often pretty dusty and even with my best efforts to protect everything, crap found its way onto my sensor and,
  2. just because the wildlife isn’t afraid of you doesn’t mean that it sits still waiting for you to ready your shot.

A guy that I often spent time with on the trip had an unusual 35-300mm lens (he was shooting with a Canon 10D – with its 1.6 crop multiplier, his 35mm-equivalent focal length was 56-480mm).  While he gave up some speed to my setup, he could quickly get shots that I often missed.  Additionally, since he never removed his lens from his camera body, he never had to deal with foreign objects making the camera’s sensor their new home.  Since the archipelago is at the equator, there is usually plenty of light.  Sacrificing a few stops of lens speed is a non-issue when you consider the additional shots you can get.
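Incidentally, the crop-multiplier and extender arithmetic above is easy to sanity-check.  A quick sketch using the numbers from this trip (the 2x extender doubles focal length and, since each lost stop multiplies the f-number by √2, two stops takes f/2.8 to f/5.6):

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """35mm-equivalent focal length for a sensor with the given crop factor."""
    return focal_mm * crop_factor

def aperture_after_stops(f_number, stops_lost):
    """Each lost stop multiplies the f-number by sqrt(2)."""
    return f_number * (2 ** 0.5) ** stops_lost

# 5D (full frame, crop factor 1.0): 70-200mm f/2.8 with the 2x extender.
print(round(equivalent_focal_length(200 * 2, 1.0)))  # 400mm at the long end
print(round(aperture_after_stops(2.8, 2), 1))        # f/5.6 wide open

# 10D (1.6x crop) with the 35-300mm lens:
print(round(equivalent_focal_length(35, 1.6)),
      round(equivalent_focal_length(300, 1.6)))      # 56mm to 480mm
```

Which is exactly why his single lens covered nearly the whole useful range without a lens change.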

I shot all the pictures as JPEGs.  I was tempted to shoot in RAW mode, but felt that the loss of speed (in frames per second) would keep me from getting some of the action shots (e.g. birds flying or diving) that I wanted.  I don’t think I’ll regret this decision.  Besides, I’ll probably never get around to processing all the pictures even as JPEGs.  Having to process that many RAW pictures might take me a lifetime.

Finally, I need to say something about the IS lenses that I brought.  I purchased the 70-200mm IS lens for this trip (it replaced a non-IS f2.8L 70-200mm Canon lens that I’ve had for years).  This lens is great!  The documentation says that the image stabilization can make up for two full stops of lens speed.  I completely believe it.  I often took completely clear handheld shots with this lens at full zoom WITH the extender on (400mm).  Because it was so good, I stopped carrying my monopod.  It was a great upgrade and will become my standard, everyday lens.
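Those “two full stops” translate directly into shutter speed: each stop of stabilization lets you hand-hold at half the shutter speed.  A sketch of the arithmetic, using the old 1/focal-length rule of thumb for the slowest hand-holdable shutter speed (the rule is just a heuristic, not anything from Canon’s documentation):

```python
def handheld_shutter(focal_length_mm):
    """Rule-of-thumb slowest handheld shutter speed, in seconds: 1/focal length."""
    return 1.0 / focal_length_mm

def with_stabilization(shutter_seconds, stops):
    """Each stop of IS lets you shoot at twice as slow a shutter speed."""
    return shutter_seconds * (2 ** stops)

# 70-200mm at full zoom WITH the 2x extender: 400mm effective.
bare = handheld_shutter(400)              # about 1/400 s without IS
stabilized = with_stabilization(bare, 2)  # about 1/100 s with two stops of IS
print(f"1/{round(1 / bare)} s -> 1/{round(1 / stabilized)} s")
```

Going from needing 1/400 s to getting away with 1/100 s at 400mm is exactly why the monopod stayed in the bag.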

 January 11th, 2007  
 Misc Thoughts, Photography  
 Comments Off on Me and Chuck Darwin