Note: Riley Brandt has released some video lessons on using Darktable and other open source software for photo development. I highly recommend checking them out.
Here we are getting into the good part. I've become quite fond of Darktable's RAW developer. The tools are solid and in some areas it even outstrips Lightroom's develop module.
The first thing I noticed is that Darktable doesn't come with many modules enabled out of the box. That's easy enough to remedy though: simply expand the area labeled "more modules" and enable the modules you'd like to use. Clicking through a module's states a few times will also add it to your favorites list. I recommend enabling color correction, profiled denoise and monochrome, as those have been some of my favorites thus far.
Darktable groups modules together into a few logical bins. These groups are called Active, Favorites, Basic, Tone, Color, Correction and Effects, in order from left to right just below the histogram. Active is handy for disabling adjustments without digging through the other groups. The Favorites group contains all of the modules you've stuck in there, analogous to web browser bookmarks. The rest are fairly self-explanatory: contrast and exposure are found in the Basic group, tone curve and levels in the Tone group, color correction and input color profile in the Color group, lens corrections in the Correction group, and so on. It's pretty logical once you learn your way around the interface, but it does take some getting used to if you're coming from Lightroom.
I'm not a huge special effects/post processing type of photographer, so I usually hit a few select modules and export my finished edit. Darktable does store edits in XMP files, but not every other software package supports the same adjustments in the same way, so it's nice to have a 16-bit TIFF of your final. My first stops are usually input color profile and base curve. Darktable supports camera ICC profiles but not DCP profiles. This posed an issue for me early on, as I had been using a ColorChecker Passport and the X-Rite software, which only generates DCP profiles. That's fine if you're using Lightroom or Camera Raw, but not much else. Apparently Argyll CMS supports the ColorChecker Passport with the correct template (which I downloaded), but I could never get it to recognize the test pattern. All was not lost, though: I managed to get a working ICC profile from dcp2icc after using X-Rite's software on my Mac to create the DCP profile.
Base curve is my next stop; I usually choose the preset that goes along with my camera model. Base curves take the RAW data and essentially tune it to look good on your monitor. This is similar to the image treatment your camera applies to JPEGs when you set it to Camera Standard or Camera Landscape. Darktable has a few built-in base curves for each manufacturer, and you can create your own as well based on JPEGs from your camera.
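Under the hood, a base curve is just a one-dimensional tone mapping from linear sensor values to display-ready values. As a rough illustration (the control points below are made up for the example, not Darktable's actual curves), applying one is a simple interpolation:

```python
import numpy as np

# Hypothetical base-curve control points: (linear input, display output),
# both normalized to the 0-1 range. Real camera curves have many more points.
curve_x = np.array([0.0, 0.15, 0.5, 1.0])
curve_y = np.array([0.0, 0.35, 0.8, 1.0])

def apply_base_curve(linear, xs=curve_x, ys=curve_y):
    """Map linear sensor values through a tone curve by interpolation."""
    return np.interp(np.clip(linear, 0.0, 1.0), xs, ys)

# A midtone gets lifted by this example curve, much like a camera's
# JPEG treatment brightens and adds contrast to the flat RAW data.
midtone = apply_base_curve(0.5)
```

The same idea applies per channel across the whole image, since `np.interp` happily takes arrays.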
If the image needs some exposure tweaks I'll hit up the exposure module. This module works about the same way as every other exposure adjustment I've ever seen in a RAW developer. Darktable's masking capability is quite good, so it's easy to just paint in adjustments over parts of the image. If no adjustments are needed I'll move on to levels, tone curve, contrast, saturation and local contrast. It's nice to have levels and curves in a RAW developer, and when I do end up back in Lightroom for whatever reason I find myself missing them. Local contrast is the closest thing I've found to the clarity adjustment in Lightroom and it works OK, although it seems to highlight noise more than accentuate details. I've been primarily using the tone curve tool to adjust highlights/shadows instead of the highlights and shadows module, as it seems to give better results. Those are just my findings; you may like the other module better.
Speaking of noise, be sure to take a look at the profiled denoise plugin. It looks a bit complicated at first, but after some trial and error I've found settings that seem to work fairly well. The wavelets mode seems to work better for chroma (color) noise, while non-local means seems to work better for luminance noise. Thankfully, Darktable can apply the same filter multiple times in multiple modes. Generally I'll duplicate the profiled denoise plugin and set one instance to wavelets, strength 1, blended uniformly in color mode. The other I'll set up as non-local means, strength 0.5, blended uniformly in lightness or HSV lightness mode. Those settings have worked well for me, but you may want to tweak things.
On a side note, I'd also recommend looking at the color correction module. It's more of what I'd call a special effect, as it's most similar to split toning in Lightroom, but it's very useful for color grading your images. Much more customizable, too.
That's about it for a quick pass through the RAW developer. This isn't intended to be a full tutorial, just a quick breeze through some of the steps I use to edit my RAWs. Next up I'll look at the GIMP and a few specialty programs out there for photo editing in the Linux world.
Note: I have been relatively busy of late and I've been constantly revising my workflow and thus these posts. Hence the break in between updates. Still working on getting it dialed in. With Adobe's recent move towards punishing perpetual license holders I imagine alternatives will be getting some more attention.
Now we're ready to do some actual photo work. The first step in any photographic workflow is importing, organizing and tagging. Personally, I prefer to keep my photos organized on the filesystem in lieu of using the album or collection features of a particular software package. This is for a couple of reasons, but the main one is to remain independent of whatever development package I'm using. If Aperture and iPhoto have taught us anything, it's that these things are moving targets and it's probably not wise to bolt yourself down to any particular application. Most operating systems and desktop environments support metadata in some fashion, so in a lot of ways the organizing side of these applications is a bit redundant now. Mostly, Lightroom, Aperture, Darktable, etc. just add a GUI that's better suited to managing photos than the file manager built into your OS.
The next little bit is going to be pretty OS independent. It's also my personal way of doing things; it might not work for you or it might not make any sense, and that's fine. I organize my photos into directories based upon a few criteria. Mainly I separate them out by what type of job they were, whether they were part of a series or a project, or whether they're just random day-to-day snapshots. For example, I'll have a Photos directory with sub-directories called 000_Projects, 001_Screenshots, 002_Clients, 003_Models_and_Tests, 004_Photos_by_Year. Inside of those directories I'd have directories named after the client, job, date, or other criteria. From there I just copy the files over like any other document. After all, RAWs and JPEGs aren't any different from other types of files.
Some people like to rename files off the camera; I don't. I've worked with other photographers on jobs or as a second shooter, so I have my camera doing its own custom naming with my initials (eg LGH_XXXX.NEF). I generally don't use something like Photo Mechanic or Rapid Photo Downloader to cull or back up my files either, as I'm pretty set in my current workflow. However, I do recommend giving Rapid Photo Downloader a shot if that's your thing. My shooting style is deliberate rather than spray and pray, so I generally don't have many photos to throw out when I get back to my desktop. However, I've used Lightroom for this purpose and continue to use Darktable to cull. Again, this may not be ideal if you generate a ton of images per assignment/vacation/outing/whatever. For whatever reason I like to shoot like I still have a roll of film in my camera, and I don't fill up cards.
My next step involves tagging the images and applying metadata. In the case of RAW files, Darktable writes this information out into an XMP sidecar file. Lightroom will do the same thing, but you have to turn that option on. I prefer this to storing the metadata in a monolithic library, as it's more portable. Darktable has pretty rudimentary metadata editing support, but it gets the job done. The presentation could be a little more polished in my opinion, and support for a few other fields could be added; hopefully more refinements come in future versions. The lighttable module is probably the weakest point of the software right now, but it's still very usable and highly customizable.
Darktable has a few presets for metadata. I started by using one of the Creative Commons options, filled in my name as the creator, and imported from there. In the metadata panel you can customize the defaults and create your own presets if none of the defaults covers your needs. Unfortunately, Darktable does not seem to support the full IPTC Creator fields as of this writing; at least I haven't found anywhere to edit things like the address, phone number and website fields. Instead, I simply put my contact info in the tags, including my website. That's not so much to prevent infringement as to direct people to my site from images they find on places like 500px and Flickr that display the tags. Other than that I don't go crazy with tags. It's definitely one of those less-is-more things. I limit it to 10-15, usually including the subject, the name of the client, the location, etc. All of this works just like that other commercial product everyone else uses.
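One nice side effect of sidecars is that XMP is plain RDF/XML, so you can pull your keywords back out with nothing but the standard library. A minimal sketch, assuming the tags live in the usual Dublin Core dc:subject list, which is where the tools I've used put them:

```python
import xml.etree.ElementTree as ET

# Standard XMP/RDF namespaces used for keyword tags.
DC = "{http://purl.org/dc/elements/1.1/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def read_subject_tags(xmp_text):
    """Return the dc:subject keywords stored in an XMP sidecar's XML."""
    root = ET.fromstring(xmp_text)
    tags = []
    # Keywords live as rdf:li items under a dc:subject element.
    for subject in root.iter(DC + "subject"):
        tags.extend(li.text for li in subject.iter(RDF + "li"))
    return tags
```

In practice you'd read the `.xmp` file next to the RAW and pass its contents in; the point is just that the metadata stays inspectable without any particular application.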
Hopefully that was helpful and not too rambling. I keep updating and revising this workflow as I go, so it's taken a while to get this all written down. Next up I'll dive into RAW development. That will be more about Darktable specifically and not necessarily about Linux.
Perhaps the most important part of any graphic workflow is color management. A colorimeter should be up there with a camera and a lens on your list of equipment to buy. Without one there's just no way to ensure reproducibility of your photos. Colorimeters are relatively cheap nowadays, and even the basic ones are more than enough for discerning photographers. If you don't have one, I recommend going and picking up a ColorMunki or ColorMunki Display right now. These are great, inexpensive colorimeters that work well and last many years. My ColorMunki Display works fine with Xubuntu 14.04, and I imagine most USB colorimeters would.
Once you have the hardware you'll need the software. On Ubuntu-based distributions this is very easy, somewhere between falling off a log and screwing in a lightbulb. You can either use the built-in display calibration tool in regular Ubuntu or install dispCalGUI via the Software Center or a web download for Kubuntu or Xubuntu. If you are using one of the Ubuntu derivatives, as I am, you'll need to install whatever bridge your desktop environment needs, plus colord if it isn't there by default. For Xubuntu and XFCE that's xiccd. Again, not terribly hard. This is so the resulting profiles can be loaded by XFCE/Unity/KDE/etc. In standard Ubuntu you won't even have to do that much if you use the built-in tools; however, I recommend trying out dispCalGUI anyway, as it supports more features of the colorimeter's hardware, such as ambient light detection.
As far as I can tell there's no calibration reminders in dispCalGUI like the X-rite software. I may have missed it somewhere though. I just put a reminder in my calendar to overcome that problem.
All in all display calibration is pretty simple in Linux. If you can do it on Windows or OS X you can do it on a modern Linux distribution. The whole process is about as hand-holdy as it gets.
Next up I'll go over organizing, tagging and metadata handling.
Let me preface this by saying I've used Macs since the System 6/7 days, when I was in middle school. I've also owned and built PCs during that time. I'm a general technology kind of person. I've used both platforms for almost every task imaginable. Even with that, OS X has remained my platform of choice for important work over the last fifteen or so years, both photographic and otherwise. So keep that in mind before you get your drawers in a bunch and fire off an angry email about me being a Mac hater or not hip enough to get it. I was writing code on a Mac back before the rest of the world figured out that Apple was cool and Steve Jobs knew what he was doing. I remember treating my Titanium PowerBook with the utmost care so I didn't ding the paint. I used Macs before the iPod was even a thing. I remember organizing and editing photos before Lightroom existed. So get off my lawn, you darn kids!
OK, with that out of the way we can get to the meat of the post.
A few months ago I started looking at replacing my aging Mac Pro. The final straw came when 20 of my 24GB of RAM failed and one of the RAID disks went out. Old, slow CPUs and a lack of memory really hampered my work. I RMA'ed the RAM, but that would take a while to come back; FB-DIMMs were expensive when they were actually being mass produced and have only gone up in price in recent years. Ordering more RAM to hold me over seemed like throwing good money after bad at this point.
I don't really care for the new iMacs. Part of the reason I've held on to this Mac Pro so long is because it uses easily replaceable and upgradeable parts, which greatly extends its service life. I've added a USB 3 card, upgraded the graphics to support three monitors, added more RAM, and more drives over the years. These days you can't even replace the hard drive on an iMac with off-the-shelf components due to the specialty firmware Apple uses. The new Mac Pro tube is firmly out of my budget (thanks, homeownership) and really isn't that great of a deal right now due to the hardware being aged. However, if I had an infinite budget I'd probably get the 6-core model with 32-64GB of RAM, a few 1440p 27” anti-glare displays and a USB 3 or Thunderbolt disk array and call it a day. But that wasn't happening.
I looked at what kind of Mac I could get with my budget. It basically came down to a 21.5” iMac with an i5 or an i7 and 8-16GB of RAM. Not exactly appealing coming from the beast that is a Mac Pro. That felt like a real step down, even given the age of my machine.
The same amount of money would get me a killer DIY PC however.
Now, in defense of the Mac option if most of your work is Lightroom and basic photo editing even the cheapest Core i5 iMac with 8GB of RAM is enough. Heck, a basic Mac Mini would probably be enough. Especially if you have a smaller resolution sensor or older camera. I do more than basic photography with my machine. Stacking for astroimaging or panorama stitching with a high resolution camera eats up RAM and CPU time. I'd like to move into 4K video at some point in the near future too, as well as some rendering, and yes, even some gaming.
That doesn't explain the choice of Linux as the OS. That's simply a matter of personal preference. The philosophies of using open source aside I have a few other projects that are simply easier to work with on a *nix based OS. I'm also not a fan of Windows 8.1's interface or of the software as a service path Adobe is taking as I was an every-other-version-at-most upgrader of CS. I don't use Photoshop enough to justify the monthly price tag and didn't want to wait around for Lightroom to head that direction. I mostly used Photoshop for panorama stitching, HDR merging, some stacking and the occasional cloning/content-aware fill. All of these can be, with varying degrees of difficulty, accomplished with other software and have been around since CS3 or so. No real need for me to be forking over $10 a month for 5-7 year old features.
There's also the whole “if you do everything the same way the rest of the world does you'll never learn anything” idea. Practically every serious photographer on the planet uses Lightroom. It's fine software and does a lot of things very well. If it ran on Linux I would have it installed (hint hint, Adobe). But to me that means your photos are just going to end up with that “Lightroom look” to them. If you shoot RAW, your choice of RAW processor matters a lot. Adobe Camera Raw is very versatile, but at the same time it can be somewhat limiting and definitely imparts its look on a finished photo. It's the same reason some photographers choose to use Phase One. It just produces a different look.
Every once in a while I just want to try something different with my post processing too, to see what else is out there, how it stacks up and what I can accomplish. I picked this time to try it because this is a slow time of the year for me; if I were in full shoot/edit mode I wouldn't have the time to sit down and learn new software. It has been a journey, that's for sure. There aren't a lot of photographers going this route, and there aren't as many resources available to those who choose to go this way. You'll really need to know how post processing works, what tools do what, and be able to figure out problems on your own.
I'm writing this from the perspective of a very experienced Linux user. This isn't a how to dual boot your PC or why should you run Linux series. I'm not going to explain what options on ls do what or how to fix your grub configuration. I've been a Linux, BSD and other *nix user for a very very long time and have accumulated a lot of background knowledge that has come in handy for this process and will not be going into detail with that here. You should be comfortable with the OS already before getting into these posts. With that I'm sure I just lost about 90% of the people reading this.
I'm breaking this down into a few bite-size chunks. The next post will deal with color management, afterwards I'll deal with organizing, keywording, metadata, etc. Finally we'll get into RAW development, importing Lightroom edits and more complex editing. I'm also going to stress that this is an experiment. I still have a MacBook Pro loaded with the usual Adobe-branded suspects. I will continue to keep Apple laptops. I may end up back on a Mac desktop in the future. I'm still ducking out to Lightroom and keeping its library up-to-date with my photos. This is not a finished process and may never be. It should really just be viewed as another set of tools in your box. I'm offering this as more of an interest piece than an advocacy piece; what works for me might not work for you, YMMV, yadda, yadda. Don't go around telling your friends some guy on the internet said Linux is the best for photography and they're a tool for using what they do, because that's not what this is about.
In the end it's about making great photos, and you shouldn't stress about the process as much as most of us do. Just use whatever works for you.
Yesterday I stumbled on a Reddit post about the launch attempt out of Vandenberg AFB. The payload belongs to the National Reconnaissance Office, which is a pretty secretive outfit. The launch patch can be viewed below.
Somewhat mysterious and not as elaborate as other NASA patches. It could be a clue to the payload but given the history of the patches I'm going with "hey guys, let's give the graphics people a fun project and screw with some conspiracy theorists." It seems to have worked given the amount of time devoted on various blogs and boards to deciphering their meaning. Speaking of historical patches here are a few that stuck out to me:
The Latin reads "better the devil you know."
NROL 39 - Space Cthulhu
A little over a month ago I turned 30, earlier this year we bought a house and I've recently started looking at strange things called "retirement" and "life insurance" instead of NewEgg, car parts or camera stuff. I don't get it either. There's something terrifying about realizing I could work in my current job until I'm too old to do anything else. That is the stuff of nightmares.
First off, the house. We bought a nice place that's kind of out in the middle of nowhere. I've gone from maybe putting gas in my car once a month to doing about 175 miles a week of commuting. The garage is awesome and the house itself is huge. Our Realtor thought we should look at something smaller, as this place is really set up for a 4+ person family. But we were largely tired of being cooped up in our ~700-800 square foot apartment and being right on top of each other all the time. Megan and I didn't even have separate office spaces, and those of you who know how I am about my desk can see how that was less than ideal.
The irony of the whole thing is that we mainly bought the place because of the huge bonus room upstairs and we almost never go up there. Seriously, a hobo could be living in it right now and we'd never know it. The first floor is much bigger than we thought so we generally stay downstairs.
I'm still not entirely sure about this whole provincial living thing. It's quiet, but the connectivity is awful and I miss being able to walk to work. However, there are no loud parties, drunk kids in the front yard, drive-by paintings of the neighborhood, sketchy magazine sales people trying to scam you, other seemingly sketchy folks banging on the door asking for rides (we lived less than 30 feet from a bus stop), etc. All of those things actually happened, by the way. At my core I think I'm a town or city dweller. I grew up in the countryside and, to be perfectly honest, I don't have terribly romantic feelings about it. I do miss curbside trash pickup, cell phone service and actual broadband. This DSL is what I call "for humor purposes only." We had to get an actual land line telephone too. However, Megan is exceedingly happy out here, so I'm content for now. I love having the space to spread out and work on projects too.
I do rather enjoy the garage as well. So much so that I've nearly filled it up. I recently picked up a used and abused Forester XT for a nice price that I've been working on. It needed a lot of work: new brakes, timing belt and associated hardware, plugs, and exhaust for starters. I'm still working on the exhaust, as both catalytic converters are shot and expensive to replace if I want to stay with stock parts. I may just end up going the aftermarket route. All this has led me to become even more familiar with turbocharged horizontally opposed engines, which just makes it easier to work on an old Porsche at some point. I've also discovered I really enjoy heated seats, something I thought absurdly frivolous before.
A lot has changed in the last few months and I've taken things with different degrees of grace or insanity. At this point I'd say 2/3rds of my coworkers can't stand to be around me. I don't fit in with adults very well. I've also been working on a few new things on the side that don't seem to be going anywhere fast at the moment, more on that later.
In this installment I'm going to go over some of the gear I use in the field, what it does and how I set it up. Please note: don't feel like you need to run out and buy all of what I have listed in this post! It gets expensive, fast. In reality all you need to get started is some sort of camera with manual controls and a decent tripod. I'm also only going to really go over the basics. Once you get deeper into astro work you'll probably want to add things like an auto-guider and acquisition laptop to the mix.
First things first, you do need a camera with manual controls. Shooting at night will severely confuse whatever metering system your camera has. Something with a tripod socket as well. Beyond that anything is fair game. APS-C, full frame, m4/3s, etc are all fine. It just depends on what type of imaging you want to do. Generally full frame is better for wide field while the smaller sensors do better with small telescopic objects. Personally I have both APS-C and full frame, eventually I'd like to add a few smaller sensors to the mix too.
If you're using some sort of camera with interchangeable lenses you'll need some glass to put on that mount. Most people assume that fast lenses are the best for astrophotography and run out, buy ultra-fast primes and shoot them wide open. That's all well and good for terrestrial subjects however stellar objects are a different story. You see, photographing bright points of light on a nearly perfectly black backdrop is a pretty hard task for any lens. Prime lenses are generally especially bad with distortion, chromatic aberration and field curvature when used at their widest aperture. There are a few exceptions, but by and large you're better off closing down a stop or two. This will help alleviate some of the nasty distortions and color fringing. I'm not saying prime lenses are no good for astro work, I use them all the time, just be aware that shooting your 50mm at f/1.4 won't give you good results. Below is an example of what kind of distortion you'll get out of a wide open fast prime. More than likely you'll need to manually focus your lens as well. Most modern lenses do not have a hard focus stop at infinity and the ones that do are sometimes a little off. I usually use Live View on my camera zoomed in on a star and focus the camera that way. You'll want to make the stars as pinpoint as possible and sometimes that'll be harder than others.
If you're using a longer lens (I usually notice it at 300mm and up) you'll probably see the stars sort of shaking on the screen as currents in the atmosphere move around. Astronomers call this seeing. During a night with good seeing the stars will appear as pinpoints; conversely, on a night with bad seeing the stars will appear as smeared-out disks, and sometimes the turbulence in the atmosphere will cause them to shake in the view. The longer the focal length of the telescope or lens, the more intense the effect. Sometimes on nights when you can't really use a larger scope you can still get away with some imaging at middle to wide focal lengths just fine.
Just doing star trails? Go pick up a solid tripod and ballhead. I like Induro as they seem to have the bang for buck ratio down. A wobbly $20 Wal-Mart wonder isn't going to work.
This brings us to tracking the night sky. Well, rather, counteracting the motion of the Earth. There are a few options here. For the casual astrophotographer or someone looking for a lightweight solution for travel, I like the iOptron SkyTracker. It's fairly inexpensive, small and comes with a polar alignment scope. The one limitation is its weight capacity. You're not going to be loading it down with a large lens or telescope. I've heard of folks using a DSLR and 70-200mm zoom with it before, but I'd wager that's pushing it. Vixen makes a similar mount called the Polarie, and I have a passing familiarity with it as well. As with most things, the time spent in preparation is more important than anything else, and that includes aligning the mount. Most polar alignment scopes come with markings for aligning with different constellations at different times of the year. If your alignment isn't spot on you'll get some drift during exposures. It takes practice and some time, so plan about 30 extra minutes or more for your first few outings.
For more serious astrophotographers, mainly those who want to use a big lens or a telescope, I recommend picking up some sort of equatorial mount. I own an Orion SkyView Pro GoTo, and it will hold up to about an 8" Schmidt-Cassegrain, camera and supporting equipment. I generally like Orion's stuff, but Celestron makes decent mounts too. You can buy an equatorially mounted telescope and use the mount for general wide field work, as most equatorial mounts use a fairly universal dovetail mount. I'd stay away from driven fork mounts or alt-az mounts, as those suffer from field rotation during longer exposures. They're fine for visual use though. As with most things, it seems most manufacturers overstate the load capacity of their equatorial platforms. As a safety measure I wouldn't try to load a mount with more than about 75% of its rated capacity, and even that I feel is generous; 50% is probably a better number to work with. Loading it to 100% more than likely won't send it crashing to the ground, but a strained mount isn't a very sturdy mount. You'll end up with a lot of bad shots and trailing due to slipping and the motors struggling to keep up. Keep in mind these things are big, heavy and take a long time to set up or take down. You aren't going to be backpacking with most of these mounts. Get a smaller iOptron or Vixen for that.
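To put numbers on that derating, the arithmetic is trivial. Here's a quick sketch for a hypothetical mount rated at 30 lbs (the rating is made up for the example):

```python
def safe_payload(rated_capacity_lbs, derate=0.5):
    """Practical payload for a mount, derated from the maker's rating."""
    return rated_capacity_lbs * derate

# For a hypothetical mount rated at 30 lbs:
conservative = safe_payload(30)        # the safer 50% figure
generous = safe_payload(30, 0.75)      # the generous 75% figure
```

So on that example mount you'd plan your scope, camera, rings and guiding gear around roughly 15 lbs, not the 30 on the box.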
Next up we'll talk about post processing.
Today marks the 45th anniversary of the Apollo 11 moon landing. In a time when our entire civilization was held hostage by two superpowers and our own nation was struggling with massive internal strife we did the impossible. Even in our most tumultuous times humans are capable of greatness. The irony in the whole thing is we used much of the very same technology that threatened our extinction in accomplishing the Apollo program. Proof that our ingenuity is often a double edged sword. Today we face other challenges created by our own cleverness. Hopefully we will use our time and energy wisely to overcome those problems.
Buzz Aldrin's boot print on the moon
In a few thousand years, long after people talk about the Americans as we talk about the Romans, ancient Chinese dynasties or the Aztecs today, history books will talk about our civilization's legacy and those who did great things. The great men and women of our time will largely be a footnote. I'd wager that few would remember the Clintons, Bushes, Roosevelts, Kennedys, Lincolns or Obamas, just as most people today probably can't name more than a few Roman emperors, senators or Chinese dynasties. However, three names will probably come to mind when their history books talk of the Americans: Armstrong, Aldrin and Collins. The photo of a single boot print on another world will be presented in the same chapter, a reminder of a time when we first escaped the bounds of our planetary cradle and started to take our place as explorers of the cosmos. Perhaps their history books will have other tales of similar experiences on other worlds by other humans. One can imagine.
I'll leave you with a video by Mr Reid Gower who has rehashed some of the words of Carl Sagan. I wasn't alive during the Apollo program so the video and descriptions of those who were is all I have to go on. However I think this video describes the experience well.
Astrophotography has become ever more popular in the last decade or so as the equipment to do it well has plummeted in price. Now your average backyard astronomer with less than $3,500 or so in gear can take images that rival those from large imaging instruments of just a decade ago. You can even get some decent results with less expensive equipment and regular old camera lenses. Over the next few posts I'm going to lay out what I do to prepare, shoot, edit and finish my work. This is not the end-all-be-all guide to astrophotography, just a workflow that works for me.
Astrophotography seems easy on the outside but it is easily one of the most complicated, math intensive and taxing types of photography out there. It's not for the impatient. You can spend several hours over the course of one night shooting images and end up with squat on the other side due to poor planning or not understanding the subject. I'll go over some of the common pitfalls here too as I've made the same mistakes myself.
First off you'll need to know the basics of photography. Aperture, shutter speed, ISO, etc. If you don't know that like the back of your hand go figure it out and come back. I'm not trying to be rude, this blog post isn't going anywhere and you need to have a firm grounding. I also come from a math heavy background and will get into some numbers. Nothing most people can't handle but you should be warned.
Cue Jeopardy music ...
Back? Great! Now let's move on. The most basic type of astrophotography involves plunking your camera down on a tripod, grabbing a cable release and shooting some star trails. This is a great, simple way to get started.
All you really need for star trails is a sturdy tripod, a camera with a cable release, some kind of lens and a few hours of time. That's another thing: a short exposure for most deep sky objects is around 30 seconds, and most of my images include a few exposures ranging up to 10-15 minutes. Dress warmly and bring something with you to occupy the time. A red headlamp or flashlight helps too. The longer you leave the shutter open, the longer the trails. You'll need an area with fairly dark skies and no light domes on the horizon, otherwise you'll overexpose and the sky will be a putrid yellow color. A low ISO (100-400) and a small aperture will help too. You'll probably need to do a couple of test shots when you first get going to figure out what you can do at your location.
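To put a rough number on how fast trails grow, here's a back-of-the-envelope calculation. The sidereal rate and the plate scale formula are standard astronomy arithmetic; the 50mm lens and 6 micron pixel figures in the example are hypothetical values I picked just to show the shape of the math, not a recommendation.

```python
import math

# Stars drift about 15 arcseconds of sky per second at the celestial equator.
SIDEREAL_RATE = 15.041

def plate_scale(pixel_pitch_um, focal_length_mm):
    """Arcseconds of sky covered by one pixel for a sensor/lens combo."""
    return 206.265 * pixel_pitch_um / focal_length_mm

def trail_pixels(exposure_s, pixel_pitch_um, focal_length_mm, declination_deg=0.0):
    """Roughly how many pixels a star smears during an exposure.
    Stars near the celestial pole (high declination) move slower across
    the frame, hence the cosine factor."""
    drift = SIDEREAL_RATE * math.cos(math.radians(declination_deg))
    return exposure_s * drift / plate_scale(pixel_pitch_um, focal_length_mm)

# Hypothetical example: 50mm lens, 6 micron pixels, 60 second exposure,
# star sitting on the celestial equator.
print(trail_pixels(60, 6.0, 50))
```

Run it for your own focal length and exposure time and you'll get a feel for how quickly the trails lengthen as you zoom in.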
Another way to dodge the overexposure is to do what's called image stacking. This is where you take a bunch of shorter exposures, say 30 seconds to a minute, in the field and stack them in post to create the star trail effect. For this you'll want to turn off your camera's long exposure noise reduction feature and take dark frames manually.
Say what now? A dark frame? What are you talking about? Glad you asked. If you've ever dug around in your camera's menus you've probably seen a feature called something like "Long Exposure NR," at least that's what Nikon calls it. It should be on by default. What this does is called dark subtraction. If you take a photo with a shutter speed longer than a few seconds, the camera will take a dark frame of the same length of time and subtract it from the light frame. Why is this needed? CMOS and CCD sensors generate a lot of noise during long exposures. Some of the older cameras (I'm looking at you, my old D200) had some nasty amp glow around the edges of the sensor as well. "OK," you might say, "but how does subtraction help us here?" Digital images are nothing but numbers. As far as the camera and your computer are concerned, a RAW file is an array of RGB luminance values, or just plain luminance if you have a monochrome sensor, Mr/Ms Leica user. In a light frame these numbers are the values that were read out of the photosites when the shutter closed, plus whatever noise was present in the sensor at that time. For shorter exposures in good light this noise is safely ignored. You've got so much signal in the light frame that it's inconsequential. For longer exposures of fainter things this noise becomes a problem and can at times be brighter than the image (light striking the sensor) itself. Temperature affects this noise as well; generally colder sensors are less noisy.
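In code, the subtraction itself is almost embarrassingly simple. Here's a minimal sketch with NumPy, assuming you've already decoded the frames into arrays (actually decoding RAW files is a separate problem and not shown); the tiny synthetic frames are made-up numbers just to demonstrate the arithmetic.

```python
import numpy as np

def subtract_dark(light, dark):
    """Subtract a dark frame from a light frame, clamping at zero.

    Both frames are arrays of luminance values. A pixel can't be
    darker than black, so we work in a signed type to avoid underflow
    and then clip back into the sensor's valid range.
    """
    diff = light.astype(np.int32) - dark.astype(np.int32)
    return np.clip(diff, 0, np.iinfo(light.dtype).max).astype(light.dtype)

# Tiny synthetic 2x2 example: the last pixel's noise is brighter than
# its signal, so it clips to zero instead of wrapping around.
light = np.array([[120, 30], [255, 10]], dtype=np.uint16)
dark = np.array([[20, 25], [5, 15]], dtype=np.uint16)
print(subtract_dark(light, dark))
```

The clamping matters: with plain unsigned subtraction a noisy pixel that's brighter in the dark frame than in the light frame would wrap around to a huge value instead of going to black.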
Thus enters our hero, the dark frame. A dark frame is 100% pure noise. Kind of like your Facebook feed. I'm just kidding, Facebook friends. Really. Or am I? You can take one manually by setting your shutter speed and ISO then leaving the lens cap on so no light strikes the sensor. Aperture doesn't matter. Then you're left with just the noise that was present in your sensor at the time of capture. Generally if you're taking many exposures you'll want to do your dark frames manually and subtract them later in post. However, I tend to leave the in-camera dark subtraction (Long Exposure NR) on for single shots or short series of photos.

Depending on your equipment you may need to take bias frames and flat frames. With CMOS based cameras bias frames are not needed, as they have a built-in circuit that takes care of that for you; they're still needed for CCD based cameras. Bias frames are taken in much the same way dark frames are, that is, with no light hitting the sensor. However, bias frames are taken with an exposure time of zero, or as close to it as your camera will get. Again, these are generally not needed for modern cameras as most have CMOS sensors.

Flat frames are used to compensate for dust on the lens or sensor. Generally flat frames are taken by aiming the camera at a uniformly colored target. Anything uniform in color will work, such as a daylight sky, as long as it's properly exposed and not blown out. For most wide field astrophotography I don't do flats. Dust doesn't seem to really bother the image much, plus it's pretty easy to keep lens elements clean. If you're using something like a catadioptric telescope (SCT or Mak) they are useful, as the imaging surfaces are harder to get to. I consider any lens or telescope under 600mm to be wide field.
Generally if I'm doing star trails, which I don't do very often, I'll do a bunch of 45-60 second exposures and a few dark frames at the end of the night, then do the subtraction and stacking in post. However, there's nothing wrong with just doing a 30 minute exposure to test the waters. You just introduce a higher likelihood that something will go wrong and you'll lose the work.
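The stacking half of that workflow is usually a "lighten" blend: for each pixel, keep the brightest value it ever reached across the series, so every short star streak joins up into one continuous trail. A toy sketch, assuming your dark-subtracted frames are already NumPy arrays (the 2x2 frames are fabricated just to show the behavior):

```python
import numpy as np

def stack_lighten(frames):
    """Combine frames by keeping the brightest value seen at each
    pixel -- the "lighten" blend that turns a series of short
    exposures into continuous star trails."""
    return np.maximum.reduce(frames)

# Three fake 2x2 frames; the bright pixel "moves" between frames the
# way a star drifts between exposures.
frames = [
    np.array([[200, 0], [0, 0]], dtype=np.uint16),
    np.array([[0, 210], [0, 0]], dtype=np.uint16),
    np.array([[0, 0], [220, 0]], dtype=np.uint16),
]
print(stack_lighten(frames))
```

Because only the maximum survives, the sky background stays at roughly a single exposure's brightness while the trails accumulate, which is exactly how stacking dodges the overexposure problem from the longer single-shot approach.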
In the next post I'll detail some of the gear I use in the field and some concepts for shooting.
Quake 3 is one of those games that has hung around for a very long time. Not only that, but its engine has been used as the basis of a ton of games since the early 2000s. id Software later released the engine under the GPL and this has made it an even more popular choice for developers. Long story short, this engine isn't going anywhere anytime soon.
Given its popularity, age, and simplicity it wasn't too surprising that folks are finding ways to use it in amplification attacks. It's happened to a few ioQuake3 servers I manage. At first I thought someone was trying to take my machine offline (I've had that happen too) but after some analysis I figured out what was going on. The attacks seem to come and go almost on a seasonal basis and have started cropping up again lately. In light of that I thought I'd share the resources I've been using to combat the problem.
Amplification attacks gained popularity in the DNS world. It's a very simple concept. The attacker spoofs their source IP address, makes a request to an unsuspecting server, and the server responds to the spoofed address. The attacker can quickly roll through a bunch of servers and redirect data to the target machine via the spoofed address. This is why it's called an amplification attack, because the attacker can gain much more throughput by combining traffic from several different sources than they could on their own connection.
Every now and again I see people using Quake 3 servers in these attacks. This includes Quake 3 Arena, ioQuake3, older COD games, etc. Anything based off the Quake 3 engine, really. The attack is easy to spot because you'll notice a disproportionate amount of traffic outbound on 27960 (or whatever port you're running the server on) compared to what's coming in. It'll stick out like a sore thumb in tcpdump, Wireshark or any number of other traffic monitoring tools. Attackers like using these games because they usually run on a standard port and are easily listed in game browsers. This allows them to dodge the suspicion that a port scan would bring them from their own ISP. In my experience it seems the attackers like to use the getstatus command as it returns a fairly large chunk of data, including a list of the connected clients. Generally in amplification attacks the attacking party will go with whatever will fire out the most data.
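If you're curious how attractive your own server is as an amplifier, you can measure it: send one getstatus query and compare the reply size to the handful of bytes you sent. Here's a rough Python sketch; the helper names are my own invention, and obviously only point this at a server you administer.

```python
import socket

# Quake 3 out-of-band packets start with four 0xff bytes, followed by
# the command -- the same marker the iptables rules match on.
GETSTATUS = b"\xff\xff\xff\xffgetstatus\n"

def amplification_factor(request_size, response_size):
    """How many bytes an attacker gets out per byte they put in."""
    return response_size / request_size

def probe(host, port=27960, timeout=2.0):
    """Send one getstatus query and report the request-to-response
    size ratio. Run this only against your own server."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(GETSTATUS, (host, port))
        reply, _ = sock.recvfrom(65535)
    return amplification_factor(len(GETSTATUS), len(reply))
```

A 14 byte request that draws a reply several hundred bytes long is a multiplier in the dozens, which is exactly why the attackers bother: a busy server with a long client list amplifies even more.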
A quick diagram of an amplification attack ...
Right now it seems like the best you can do is mitigate these attacks. The ioQuake3 folks have added some code into their version of the engine that can help. However it doesn't completely prevent your server from being used in an amplification attack. If your server is running on Linux there's a solution over at RawShark's blog via iptables.
# Initial filtering. Do a little limiting on getstatus requests directly in the input chain.
iptables -A INPUT -p UDP -m length --length 42 -m recent --set --name getstatus_game
iptables -A INPUT -p UDP -m string --algo bm --string "getstatus" -m recent --update --seconds 1 --hitcount 20 --name getstatus_game -j DROP

# Quake 3 DDoS mitigation table.
iptables -N quake3_ddos

# accept real client/player traffic
iptables -A quake3_ddos -m u32 ! --u32 "0x1c=0xffffffff" -j ACCEPT

# match "getstatus" queries and remember their address
iptables -A quake3_ddos -m u32 --u32 "0x20=0x67657473&&0x24=0x74617475&&0x25&0xff=0x73" -m recent --name getstatus --set

# drop packet if "hits" per "seconds" is reached
# NOTE: if you run multiple servers on a single host, you will need to raise these limits,
# as otherwise you will block regular server queries, like Spider or QConnect;
# e.g. they will query all of your servers within a second to update the list
iptables -A quake3_ddos -m recent --update --name getstatus --hitcount 5 --seconds 2 -j DROP

# accept otherwise
iptables -A quake3_ddos -j ACCEPT

# finally insert the chain as the top most input filter
iptables -I INPUT 1 -p udp --dport 27960 -j quake3_ddos
If you're running multiple servers on the same host replace the last line with the following:
iptables -I INPUT 1 -p udp -m multiport --dports 27960,27961,27962 -j quake3_ddos
Again thanks to RawShark for that.
These rules will prevent your server from responding to these spoofed requests. They do this by tracking the address each request claims to come from and dropping the traffic once that address racks up five hits within two seconds. The initial filtering looks for slightly different behavior directly in your input chain. With this in place your server will still receive the bad requests but it won't respond to them. Usually the parties running the attack will give up within a matter of hours or a day if they don't get the desired response from your machine.
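If the hitcount/seconds mechanics feel fuzzy, here's a rough Python model of what the recent match is doing per source address. This is a simplified sketch of the bookkeeping, not the kernel's actual implementation.

```python
from collections import defaultdict, deque

class RecentMatch:
    """Rough model of `-m recent --update --hitcount N --seconds S`:
    drop a packet once its claimed source has sent N or more packets
    (including this one) inside the last S seconds."""

    def __init__(self, hitcount, seconds):
        self.hitcount = hitcount
        self.seconds = seconds
        self.seen = defaultdict(deque)  # source address -> hit timestamps

    def should_drop(self, src, now):
        hits = self.seen[src]
        hits.append(now)
        # Forget hits that have fallen out of the window.
        while hits and now - hits[0] > self.seconds:
            hits.popleft()
        return len(hits) >= self.hitcount

limiter = RecentMatch(hitcount=5, seconds=2)
# A spoofed flood: six packets claiming the same source inside a second.
drops = [limiter.should_drop("10.0.0.1", t * 0.1) for t in range(6)]
print(drops)  # first four pass, the fifth and sixth are dropped
```

The key point is that legitimate players trip the u32 client-traffic rule and get accepted before this check, so only the connectionless getstatus floods ever hit the limiter.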
It can also be helpful to run your server on a non-standard port. While this is often just security through obscurity, it does seem to thwart the laziest folks in this type of scenario, which fortunately seems to be a lot of them. It has the disadvantage of keeping your server out of game browsers and a lot of public directories, so it may be a no-go for some folks.
As an administrator of a Quake 3 server these attacks can range from annoying to serious business. If enough people are using your machine as an amplification point it can get you blacklisted by a number of service providers. In my experience the likelihood of it affecting your server directly is relatively low. However, this is one of those "good internet citizen" things that is generally worth keeping an eye on so you aren't knocking your neighbor offline. Please be aware this doesn't "harden" your server or make things more secure.