2022 Favorite Photos

Despite 2022 being a difficult, if quiet, year for me personally, and despite not really feeling the photography thing a whole lot, I did manage to use a camera quite a bit. In no particular order, here are a few photos I took that I like for one reason or another, or that were otherwise noteworthy.


From June: we have synchronous fireflies around here, and this was my attempt at them this summer in my backyard with the D850.


Also from June, some people on a rock face, taken with the Fuji X-H1.

Lunar Eclipse

This one isn’t very original or special, but it got a lot of airtime on local and state news: the November 8th lunar eclipse, shot on the D850 as well.

RGB Light Portrait

Lastly, I have been sliding back into studio and portrait work this year, but slowly. I picked up a pair of Godox LC-500Rs and have been using the RGB modes for some shots. I haven’t caught anything really original on this front yet, as it’s been slow going and I’m still getting my feet back under me, but I like the IRL split-toning kind of look in this one. Fuji X-H1 again.

Now time for some stats. This year I kept 4645 photos totalling 147GB. That number will likely head down, as I tend to do a year-end cull in January. Specific camera stats are below. Note these won't add up to the total, as some photos came from scanned-in film, silly ancient floppy disk cameras that don't have modern metadata tags, borrowed cameras and so forth:

Camera       Number of Photos
Nikon D850   1496
Nikon D7500  573
Nikon D800   183
Nikon D1X    53
Nikon D3s    37
Fuji X-H1    1134
Fuji X-T2    609
Fuji X100F   560

All data taken from Darktable. Look, don't judge me. I know I have a problem. It's only hoarding if you don't use it, right?

295 photos got five stars and were edited. I only use one-star or five-star ratings; it's either good or it's not.

My most used focal length was 23mm. I suspect that's because I use the X100F a lot and often use the Fuji 23mm prime on the other two.
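Since all of this comes out of Darktable's SQLite library anyway, numbers like the focal length breakdown can be pulled with a one-liner. A sketch, assuming the default database location and that the images table has a focal_length column; check the schema on your own install before trusting it:

```shell
# Count photos per focal length straight from Darktable's library database.
# Path and schema (images.focal_length) are assumptions; verify on your install.
sqlite3 ~/.config/darktable/library.db \
  "SELECT focal_length, COUNT(*) AS shots
     FROM images
    GROUP BY focal_length
    ORDER BY shots DESC
    LIMIT 5;"
```

Darktable holds a lock on the database while it's running, so quit it before poking around.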

Age is not a good proxy for technological illiteracy

With 40 only a few years off, the trend of equating age with technological illiteracy has really started to bug me. Most recently I've seen a few articles about Congress, its advanced average age, and how that gets used as a crutch to explain away its ineptitude with all things technical. It's a common trope, and one I've bought into in the past, but the more experience I get the more I see it for the meme it is. Congress is bad at technology policy because of corruption, lobbies, and willful ignorance. They'd likely be very nearly this bad if their average age was 35, in my opinion.

First, I've met a fair share of Gen Z kids who I'd call incompetent at understanding the tools they use. Honestly, given all the screen time kids born after 2000 have had, I'm a bit surprised just how bad many of them are with basic concepts like a directory tree, or just understanding what different pieces of hardware do and how they interact. Granted, some of this is "standing on the shoulders of giants" in that using technology today is just a lot easier than it was 30 years ago when I was getting started. I had to know Hayes modem codes, IRQs, and TSRs. They press a button on their router for WPS, or in the worst case have to enter a WPA2 password. However, the number of "tech geeks" in the younger generation seems roughly proportional to what it was in my day, despite the greater exposure to technology Gen Z has had. There are plenty of experts in Gen Z; it's just not everyone under 25.

Working in a university gives one exposure to a good mix of age groups as well. I know plenty of people 45 and older who know more than I do. In fact, I've known a few rather tech-savvy retirees who kept up with things nicely just as a hobby. There's definitely a slowdown effect as one ages (I can tell you I don't pick up new languages as fast as I used to), but experience is nothing to be ignored either. Ageism is pretty rampant in the technology field overall, but that's a different rant.

Technology as a subject is rather broad and deep as well. My area of expertise is generally more low-level physical things: hardware, networking, ASM, C, some Python and virtualization. For example, I understand how a state machine and an ALU work, but give me some higher-level abstract software engineering topic and I'm not really going to follow. Likewise, I've met a few brilliant machine learning experts who didn't know the difference between RAM, cache and physical storage. I don't think either of us should be called technologically illiterate, just somewhat specialized. Likewise, I'm not an avid smartphone user (my first generation iPhone SE still works just fine, thank you), so outside of cutting off most of the garbage I don't use on it, I'm really not up on that side of the world. My battery literally lasts for days, that's how little I use it. However, just because a younger person can run circles around me there doesn't mean I'm ready to be put out to pasture or somehow technologically illiterate. Honestly, there I've just drawn a line where I think that tool's trade-offs aren't worth integrating it into my life any further.

I also find large chunks of the younger generation either largely resigned to, or ill-informed about, the endangered species that is privacy these days, and frankly that seems to be a problem at both extremes of the age spectrum. A lot of the 60+ crowd does not understand it either. I think Gen X and older Millennials are probably the most aware of what big tech and government are interested in and why it's a problem, and of just how insidious and inside-your-mind social media can be. Just because a Google or an Instagram is flying the flag of your pet cause does not mean they're your friend. They just want you to hand over more of that 21st century version of crude oil, data, and will tell you whatever you want to hear to get it. I don't know why, maybe it was the inherent cynicism 80s and early 90s kids grew up with, but that chunk seems to have a healthy skepticism of living online overall. The point to drive home here is that you're not using these services, they're using you.

This isn't meant to be a "kids these days" rant. I also acknowledge that specific skills are different across generations; Gen Z grew up with things I did not. I just think equating younger age with greater understanding of the tools in use is a bad move. Look at someone's credentials, not their birth year, before you judge them. Age is a bad proxy for "knows what they're talking about," and some of us geezers may surprise you. Likewise, I know plenty of 20-somethings I respect and would call my peers. Age has little to do with it, and we shouldn't favor one group over the other based on faulty assumptions. Ignorance knows no generational bounds, and neither does curiosity.

Recent Sunrises

Fedora GNOME RAW Thumbnails

I've been trying to give GNOME on my desktop a fair shake the last few months as I like the way it works on my XPS 13. I've run into a few problems here and there but lacking RAW image previews in the file manager is quite the oversight in 2021. Dolphin and even Thunar do it by default in KDE Plasma and XFCE respectively. I usually run Fedora with the KDE Plasma environment on my desktop machine but the last time I tried GNOME 3 a number of years ago I seem to recall RAW thumbnails existing? Perhaps not, but here's how you fix that:

GNOME RAW Thumbnails

Fixing this oversight is not too terrible but does require some fiddling. First you'll need the ufraw package. Install it through the GUI Software center or through dnf:

sudo dnf install ufraw

It'll bring a couple of dependencies with it but not too much bloat.

Next you'll need to create a file at /usr/share/thumbnailers/ufraw.thumbnailer and add the following to it. The MimeType line tells GNOME which file types to hand to the thumbnailer; the types listed are a guess at common RAW formats, so adjust them for whatever your cameras produce:

[Thumbnailer Entry]
Exec=/usr/bin/ufraw-batch --embedded-image --out-type=png --size=%s %u --overwrite --silent --output=%o
MimeType=image/x-nikon-nef;image/x-fuji-raf;image/x-canon-cr2;image/x-adobe-dng;

Close out of Nautilus, run rm -rf ~/.cache/thumbnails/*, then open a directory with RAW images. You should now see RAW thumbnails as above. Hooray!

Granted, I think Nautilus is one of GNOME's weak points, especially compared to what's available in other Linux desktop environments. Nautilus seems neglected, and at times it even makes me miss the notoriously bare-bones macOS Finder. GNOME excels in a number of areas, which makes the mediocrity of the file manager stick out. Split windows like Dolphin and more view mode options instead of just "icons" and "list" would be a good start.

RAW thumbnailing really should be in the base install of any desktop distribution using GNOME. It's in literally everything else (except Windows 10, apparently) and has been for years. If you're running GNOME you're probably looking for a modern desktop experience rather than a lean and slim install anyway, so a few more megabytes of software and a configuration file won't make much difference.

Peak Linux Desktop

Linux Desktop

We are in the peak Linux desktop era, and it might be downhill for a while. By peak Linux desktop era I mean you can pick up nearly any machine off the shelf, install Fedora, Ubuntu, Mint, Debian or Manjaro as they are, and get straight to work without fiddling with the computer itself. There are some corner cases where things are more difficult, but even nVidia PRIME works well these days. In my opinion the Linux desktop is basically at parity with the proprietary options in terms of "just works." You can boot the machine up, do the initial create-your-account-and-sign-into-your-cloud-stuff dance, and boom, you're done, just like any other desktop OS. The image of Linux being about messing with the computer is really outdated. It's a good productivity tool, as much so as Windows or macOS in my opinion. Yes, it's different and takes some adjustment to the new workflow, just as if one were to switch from Windows to Mac or vice versa, but it's not inherently broken. No, you don't have to touch the CLI for anything on the major desktop environments if you don't want to.

A lot of this is due to monumental efforts on the part of volunteers but also companies like Red Hat and Canonical. However there are two major players who have contributed huge amounts of code, time and funding to the Linux ecosystem that people seem to forget about: AMD and Intel. Intel is a major part of the reason why modern USB standards work on Linux and AMD has documented and open sourced their graphics drivers. Intel and AMD pay developers to work on this, upstream code into the kernel and have generally been decent community members. Even with all their flaws we owe them some thanks for Linux hardware support being where it is along with the fleet of volunteers maintaining things at organizations like freedesktop.org.

Late last year Apple released ARM-based Macs. Microsoft has just announced a translation layer to allow X64 code to run on ARM on Windows and has shipped ARM machines for years. Many OEMs have been shipping ARM Chromebooks for a while too. The Apple announcement came with their usual showmanship and is going to make the rest of the world take notice. Like everything else Apple does, the rest of the industry will be tripping over themselves to follow suit. I wouldn't want to be Intel or AMD right now, or be holding their stock. ARM is likely the future for most devices outside of enthusiast desktops or legacy applications. I think that even enthusiast desktop platforms will switch over, but that's just my opinion. Yes, I realize Chromebooks are technically Linux, as is Android. This is more about the freedesktop.org type of future. Chromebooks and smartphones are still very locked-down devices and do not support the full range of what the Linux desktop can today.

This brings us back to peak Linux desktop. Linux runs on armhf and arm64 just fine. The problem is that the ARM ecosystem is a mess of cobbled-together proprietary things. Even if you aren't dealing with a locked bootloader, the power management, boot process, IO and graphics vary tremendously from one ARM platform to another, unlike X86 and X64, where UEFI, ACPI and other well-documented standards are implemented across nearly the entire range. Very few ARM vendors are working on open sourcing and upstreaming support for their platforms into the Linux kernel either. Even the beloved Raspberry Pi relies on out-of-tree patches for full support. Most of the other consumer-facing ARM boards out there for Linux rely on reverse engineering in some part or other, even if just for the Mali graphics.

There is hope: a standard called ARM ServerReady mandates UEFI and ACPI for compliance, which solves the boot process and power management problems. Despite the data-center-centric name, the standard works fine on desktops and laptops as well. Microsoft's ARM Surface products used it, as does the ARM-based Lenovo Yoga. Tyan and Gigabyte have been selling a range of ARM servers that fully support it too; indeed, you can grab an arm64 Debian installer and slap it right on the Tyan and Gigabyte machines. They work great as long as you don't need graphics.

Alas, this only solves part of the problem, as you're still dealing with a lack of driver support. Mali graphics have a reverse-engineered driver that had been mostly accepted into the kernel the last time I checked, but like most reverse-engineered things it's usable but not fully featured, a very similar feeling to the Nouveau driver for nVidia cards. Most ARM options for Linux on the desktop or mobile right now are low-performance patchwork things like this.

What we need is an ARM vendor to step up like Intel and AMD have: work hard on upstreaming support into the Linux kernel for all their hardware, and make ServerReady the default ARM platform. Huawei is probably the closest on this, as the Chinese tech sector is ditching western software firms in favor of Deepin and Ubuntu Linux. I think a lot of the FUD about Huawei hardware is just US government sabre rattling; I've seen no proof of it, and honestly these days it's pick-your-backdoor if you're using anything from a Five Eyes country anyway. Until someone starts working hard on upstreaming support and ServerReady becomes the de facto standard, Linux on ARM is going to be a mess and will revert to hobbyist-only territory for the desktop. I've been using Linux on and off since the late '90s and early '00s. I remember the good-old-bad-old days before Intel and AMD got onboard, and I don't want to go back. That's where the "4 hours and 6 kernel recompiles to get your network card to connect" meme came from. I've got other things to get done now and cannot spend that kind of time minding my machine.

Don't get me wrong, I don't think the Linux desktop is going anywhere. But I think the mid-future is more like the Raspberry Pi or PineBook Pro and less like a ThinkPad or XPS with Fedora, Mint or Ubuntu on it: a tool for people to create one-off projects on, not the robust desktop we have today. There may be an open, performant, upstreamed and widely available ARM platform for Linux in the future, but I think in the meantime we're in for a decade of pain. Again, I could be wrong; that happened once before, in the 80s.

CentOS, Fedora and Trust

Last week Red Hat made an earth-shaking announcement and ended CentOS 8 almost nine years ahead of schedule. Understandably this caused quite the stir in the Linux community. CentOS started off life as a community rebuild of RHEL, using the source Red Hat releases to comply with the GPL. It existed independently of Red Hat until 2014 and was quite popular, understandably so. There were several community rebuilds of RHEL back then, but CentOS was the survivor; Scientific Linux was the other major player, and it folded up shop last year. Even if you paid for RHEL you probably used CentOS in testing or development environments. Many RHEL admins trained and learned on CentOS too. I remember when Red Hat acquired the CentOS project in 2014 there was a lot of hand-wringing over what this meant, as CentOS cut directly into Red Hat's bottom line. These community worries came back up again in 2018 when IBM acquired Red Hat. Although it took six years, it seems many of the doubters were proven correct last week.

I'm not a huge RHEL or CentOS user myself. We use RHEL and CentOS in the office in some places where we are required to, but for the most part my infrastructure pieces are Debian. Among many other things, I like the governance and structure of the Debian project much more, along with the "universal operating system" approach they take. I've run Debian as a desktop as well; it works just fine in that situation too, and has become easier to do with backports, and with testing actually becoming usable and getting more timely security patches. Debian stable is probably my favorite server operating system to work with. So the direct impacts of CentOS being taken out back and shot are minimal on my day-to-day life. However, since 2016 or so I've been using Fedora as my desktop operating system. I'm not a fan of Ubuntu for a few reasons, and Fedora had what I thought were pretty sane defaults, handled the HiDPI screen on my laptop better, and flat out looked the best. I mostly ran the KDE Plasma spin at the time, but over the last 8-12 months I've even adopted GNOME on my laptop. Fedora is the project that CentOS and RHEL are ultimately derived from, and Red Hat/IBM has a controlling stake in the Fedora Project. I'm also a pretty big Ansible user and have been since before they were bought out by Red Hat.

This is where I become concerned. Two of the major projects I use on a daily basis, Fedora and Ansible, are owned and controlled by a company that just threw part of the Linux community under the bus. Yeah, I understand that they have a profit center to protect and CentOS was directly impacting it. Part of it was certainly bad communication beforehand that CentOS was a "best effort" product and not really guaranteed, but cutting CentOS 8 off in 2021 when many people were counting on it until 2029 is brutal, no two ways about it.

Right now Fedora is likely safe, as is the free version of Ansible, but I can't say I'm entirely comfortable with Red Hat having such a large stake in Fedora now. Fedora makes a big deal about being community driven, but Red Hat still has a controlling stake in the project and puts up a lot of infrastructure for it. On paper Fedora is independent; in practice it's entirely reliant on Red Hat, and Red Hat is a substantial driving force in the project. What does this mean going forward? Will Red Hat or IBM start wanting telemetry or some other dubious thing implemented in Fedora? I'm not saying they will, but I think a lot of free software users and fans would feel better with a more independent Fedora. Even if it meant NPR or PBS style pledge drives every year to pay for infrastructure, I think that would be preferable to so much reliance on Red Hat and IBM. I'd gladly chuck some money at a Fedora Foundation if it meant they could tell IBM to pound sand whenever the community thought something from corporate was a bad move.

Red Hat likes to crow a lot about open source and community, but this move has burned a lot of bridges. In the short term I suspect they'll net a few more RHEL subscriptions from those who can afford to convert over. But the memory of the community is long, and this move may burn them in the long term; trust isn't something that's earned easily, and most of us are here because we don't trust closed software vendors like Apple and Microsoft. Right now I've switched back to Debian on my laptop, and I may move my workstation back to it in the coming weeks, but we will see. For now it's still on Fedora 33.

Debian isn't perfect, but it's at least free from a lot of corporate meddling and is a more truly community-driven project. I mean, they even have a package on anarchism in their distro.

I don't mean this as a knock against the fine folks at the Fedora Project. It's a great piece of software and I still say it's the best "get it installed and get to work" type of Linux out there. I still highly recommend it to anyone looking to start out in Linux; Debian isn't nearly as easy to get started with. Fedora also doesn't shove proprietary package managers in behind your back like Ubuntu does. I really care about the direction of, and enjoy, the Fedora Project as a whole. I just don't trust the organization paying its power bills at the moment.

ArgyllCMS Display Calibration

Most photo and graphics people are probably familiar with DisplayCAL as a monitor calibration tool. It's more thorough than the software that ships with X-Rite devices, even if you're using one of their supported platforms like macOS or Windows. Unfortunately DisplayCAL's GUI relies on Python 2, which is end-of-life and is being removed from a number of Linux distributions. Fortunately the GUI is just a front end for ArgyllCMS, and it is quite easy to calibrate a monitor with just ArgyllCMS installed. The command is as follows:

ArgyllCMS Calibration

dispcal -d 1 -v -P 0.5,0.5,3.0 -o 9300_Internal_Display

The options are as follows:

-d 1: the number of your display; with multiple displays this will be 1, 2, 3, etc. Running dispcal with no options will show which display gets which number. If you have just one display, use -d 1.

-v: verbose output

-P: screen position and scale, useful for HiDPI displays; the values are X-position, Y-position, scale. Here I have it in the middle of the screen (0.5,0.5) at 3.0x scale.

-o: Output file to save the profile to.

If you don't have a HiDPI display, also called a Retina display by Apple, the -P option might not be needed. I use it because without scaling, the color square dispcal creates is too small to cover the colorimeter. The software gives you the option to make some display adjustments before calibrating. If you're on a laptop or another display without RGB adjustments, you can press 7 to jump straight to calibration and let dispcal do its thing.

Once the ICC profile is created you can import it into your desktop environment's color correction tool and apply it to the correct display.
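ArgyllCMS can also load the profile itself with its dispwin tool, which is handy on desktops without a color management panel. A sketch, assuming the dispcal run above produced a 9300_Internal_Display.icc file in the working directory:

```shell
# Load the calibration and profile into display 1's video LUT for this session.
# The filename is an assumption based on the -o argument in the dispcal run above.
dispwin -d 1 9300_Internal_Display.icc
```

If memory serves, dispwin -I will install the profile for the current user instead of just loading it; check the ArgyllCMS documentation for your setup.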

Micca OriGain AD250

I recently upgraded my desktop audio. For about the last fifteen years I used a 5.1 setup with a Logitech Z-5500, which has been OK. Nowadays I listen to music more than anything else, where stereo is more important than surround, not to mention the reclaimed desk real estate. I had a set of Sony bookshelf speakers in storage, so I went looking for a desktop amp. My requirements were an optical input, with USB or analog inputs for secondary devices. Bluetooth wasn't a requirement, as I can use PulseAudio on Linux to turn my desktop or laptop into a Bluetooth receiver, but it would have been nice to have built in. I'm also a stickler for physical buttons and switches for switching inputs.


The Micca OriGain AD250 was still available from the usual online places even after stocks of other amps dried up due to COVID-related importation problems. My other options were a couple of models of SMSL amps, but I honestly like the looks of the Micca better. It's 50W per channel, which is more than enough for a desktop setup, and it has very simple forward-facing controls.


AD250 Back Panel

My desktop machines have had optical out for years and I always use it. It's a nice way to bypass the onboard sound card, which is usually mediocre at best and may have proprietary features or codecs that do not work fully in Linux. Optical is becoming far less common on laptops though, so I wanted something with either USB or analog in for times when I need it. Plus I still have an iPod or two laying around.

The AD250 is a compact and good looking box. I like having a physical volume control, and the switches for changing inputs feel great to use. The power brick is a bit large and you'll want to relocate it off the desk.

Overall it sounds good and the output is well suited for a desktop type scenario. I wouldn't expect this to fill a large room.

The major downside has been the popping when using the optical input. It's like a power-on pop, and I thought it had to do with the power-saving functionality on modern motherboards powering down the LED on the optical output. It's pretty easy to disable this feature in Linux:

options snd_hda_intel power_save=0 power_save_controller=N
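For that line to take effect it has to live in a modprobe configuration file, and the module reloaded (or the machine rebooted). A sketch, with a filename of my own choosing; anything ending in .conf under /etc/modprobe.d/ works:

```shell
# Persist the snd_hda_intel options across reboots.
# The filename is arbitrary; the directory and the option line are what matter.
echo "options snd_hda_intel power_save=0 power_save_controller=N" | \
    sudo tee /etc/modprobe.d/disable-audio-powersave.conf
```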

That should handle it on most onboard sound cards. The popping still happens, just less frequently, so it may be either a power supply issue on the Micca or the circuit design around the amp. The fact that it doesn't happen on the analog input makes me think it's the Micca's power supply, or perhaps the amp lacks a delay circuit. There's a slight chance it's my Sony SS-B1000 speakers, as they are 8 ohm and that's right at the upper end of what the AD250 will drive. I've tried numerous optical cables and no dice, still pops. I've noticed it some on the USB input as well, just not as bad. Unless I can get the popping solved, that may end up being a deal breaker for my use. Maybe a revision two of this amp will fix the optical port. I've read comments elsewhere on the internet that the optical input noise is not isolated to just my amp, so it seems to be either a power supply issue, a filter issue or some other design problem with the AD250.

It’s most annoying when you’re scrolling through a web page with many autoplaying media sources; for whatever reason, even with the audio muted, each video clip that starts up causes the amp to wake up and pop.

Overall I like the look, feel, feature set and user interface of the amp, but for what is a $100 desktop amplifier I find the popping annoying. From what I came across in some internet searches, the popping is not unique to my copy of the amp either, so there seems to be some quality control, engineering or component issue with these. I do use a splitter on my desktop's optical output, as I also send it out to an OriGen G2 for my headphones, and neither of these devices seems to like the active splitter I have. Once the machine powers on they will just blast static at you if the optical input is still selected; I switched to a passive splitter and they behave much better. For now I plan on hanging on to the AD250 and seeing if I can figure out what the deal is with the popping. I have a 2015 MacBook Pro with an optical output as well that I may drag out of the pile to see if it pops with that machine; it could still be some driver or power save feature on my desktop causing it. The old Logitech system didn't have these problems with either machine.

For now I can say that if you're using the USB input or the analog input, this is a great little desktop amp with an awesome user interface. Just flip the switch to your input and turn the knob for volume. No remote, no digital display to break; just simple, classic and solid, which suits my tastes quite well. It drives reasonably sized bookshelf or desktop speakers just fine and sounds very detailed, I think. If you're a basshead it's probably not the amp for you. The optical input is still suspect in my opinion, but chances are most people aren't using optical in 2020, so that may not be a deal breaker for you. Since I'm using the USB input on my docked laptop and the optical on my desktop, I really need both digital inputs though.

AMDGPU-PRO OpenCL on Fedora and Debian

Since the open source AMD GPU Linux drivers are now quite good, I swapped the GTX 970 from my old machine for a Vega 56 in the new Threadripper build. Unfortunately the kernel and Mesa drivers do not support OpenCL. I tried ROCm for a while, but only one build of version 1.2 would work with Fedora 30, and it would sometimes cause kernel panics when in use with Darktable, not to mention some strange image artifacts when using certain modules.

AMD does have a proprietary closed source driver for Linux, but it only supports a very small set of distributions, and its installer's checks are very strict. So there's no just running the installer script or installing the RPMs or DEBs. Plus, I just need the OpenCL portion of the driver and not the display portion. You can continue to use the open source kernel drivers for display, OpenGL and Vulkan, which is preferable as they outperform the proprietary AMD drivers. Just to reiterate: this is not necessary for 3D acceleration and gaming. The AMDGPU-PRO proprietary drivers are needed for OpenCL and compute only. If you're just looking to play games or run Steam, the open source Mesa implementation shipping in up-to-date distributions these days is more than good enough.

Fortunately only a few packages are needed from the proprietary AMD driver for OpenCL to work, and it is completely decoupled from the display driver. Start off by downloading the amdgpu-pro package for the latest version of Ubuntu. This will work for Debian or Fedora, as you're just going to be extracting the .deb packages anyway. You can do the same thing with RPMs and rpm2cpio, but it's a bit more troublesome; dpkg is available on Fedora anyway, so it's no big deal.

At any rate you'll need the following packages:

libdrm-amdgpu-amdgpu1
libdrm-amdgpu-common
opencl-amdgpu-pro-icd
opencl-orca-amdgpu-pro-icd
libopencl1-amdgpu-pro
The version numbers will vary depending on when you read this and download your package. First extract the tarball:

tar vxf amdgpu-pro-19.20-812932-ubuntu-18.04.tar.xz

Extract the DEB packages you need into a separate directory:

dpkg-deb -x libdrm-amdgpu-amdgpu1_2.4.97-812932_amd64.deb opencl_root
dpkg-deb -x libdrm-amdgpu-common_1.0.0-812932_all.deb opencl_root
dpkg-deb -x opencl-amdgpu-pro-icd_19.20-812932_amd64.deb opencl_root
dpkg-deb -x opencl-orca-amdgpu-pro-icd_19.20-812932_amd64.deb opencl_root
dpkg-deb -x libopencl1-amdgpu-pro_19.20-812932_amd64.deb opencl_root

Change directory to where you extracted those files:

cd opencl_root

Copy the necessary files into place:

sudo cp etc/OpenCL/vendors/* /etc/OpenCL/vendors/
sudo cp -R opt/amdgpu* /opt/.

Now the dynamic linker needs to be updated so it knows where the libraries are located. In /etc/ld.so.conf.d/ create two files and put the following lines in them:
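The lines are just the directories the cp command above populated under /opt. A sketch of the two files; the exact library paths are an assumption based on how the Ubuntu packages lay things out, so verify them on your system (e.g. with find /opt/amdgpu* -name '*.so*') before running ldconfig:

```shell
# Tell the dynamic linker where the AMDGPU-PRO OpenCL libraries live.
# Paths are assumptions based on the Ubuntu package layout; filenames are arbitrary.
echo "/opt/amdgpu/lib/x86_64-linux-gnu" | sudo tee /etc/ld.so.conf.d/amdgpu.conf
echo "/opt/amdgpu-pro/lib/x86_64-linux-gnu" | sudo tee /etc/ld.so.conf.d/amdgpu-pro.conf
```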





Then run ldconfig:

sudo ldconfig

Test and see if it works with clinfo -l; if it's working you'll see something like this:

clinfo -l 
Platform #0: AMD Accelerated Parallel Processing
 `-- Device #0: gfx900
Platform #1: Clover
 `-- Device #0: Radeon RX Vega (VEGA10, DRM 3.30.0, 5.1.20-300.fc30.x86_64, LLVM 8.0.0)

Darktable should also allow OpenCL now; you may need to delete the pre-compiled OpenCL kernels in ~/.cache/darktable if you were using ROCm before.

Darktable OpenCL

While this isn't the perfect option in terms of free software, it's still preferable to nVidia's driver in my opinion. Hopefully ROCm will become more stable and will be packaged by Debian and Fedora in the near future, which seems likely given that AMD has blessed ROCm as their future compute solution.

Linux Desktop Update

It's been a few years since I became a full-time Linux user for my photo and media workflow. As we're now in the sixth month of 2019, I thought it'd be a good time to do a quick update and report on how things are going. When I decided to move off OS X for my photography and video work in 2015, the landscape was quite different. I was a seasoned Linux user and admin at the time but had kept Macs around for access to Adobe and Apple programs. But things change: the libre software options were getting better, and I was mostly tired of giving Adobe money, and to a lesser extent Apple. Keep in mind this was before Apple released the unreliable, throttling, terrible-keyboard-having and consumable current generation MacBook Pro. If I wasn't set on switching before then, those things alone would have sealed the deal. Before that I was less hostile toward Apple's product line.

Fedora KDE Desktop

I still have a 2015 MacBook Pro in the house, but I mostly use it for two things: my Canon Pro 100 printer and my Epson V600 scanner. I have what is now an ancient copy of Photoshop CS6 and Lightroom 5.7 on it that hasn't seen any use in a very long time either. My primary machines these days are an AMD Threadripper desktop and a Dell Precision laptop, both with only Fedora Linux installed on them. No dual booting, no Windows 10 virtual machines, no “cheating” if you could call it that. Honestly, I don’t think I could be all that useful in a Windows environment these days.

Just laying my cards on the table from the beginning so no one sees a Mac in my presence and thinks I'm making this all up.

Now then, in the intervening years the Linux desktop has improved dramatically. KDE Plasma has become my go-to desktop environment for two main reasons. First, when I picked up the Precision I needed something that did HiDPI scaling reasonably well. This forced me off XFCE and got me comparing the GNOME 3 and KDE Plasma 5 desktop environments. I know a lot of people like i3-gaps or MATE or whatever, but IMO there are a lot of good reasons to stick with the major players if you're looking to just get things done. Plus, if I was going to show normal people how "full featured" the Linux desktop had become, some sort of super-tweaky desktop made for posting screenshots to /g/ or Reddit was out of the question. My wife should be able to pick up my machine and use it without much fuss. Plasma and GNOME look and act like modern desktops, so that's the route I went.

I ultimately decided on KDE Plasma 5 since it supported fractional scaling and my Precision looks best at 1.5-1.6x. GNOME 3, on the other hand, has a stinky foot for a mascot and could only do full integer scaling at the time; I think they've since added fractional scaling as an experimental feature in one of the latest releases. Then there's the difference in resource usage between the two. I really wanted to like GNOME 3 since they did something actually brave and different with the desktop interface, but it was lacking too many features, broke a lot, and ate CPU and RAM like crazy. Plasma 5, on the other hand, runs on everything from my super-old Latitude E4200 with 3GB of RAM to my Threadripper with 64GB.

The KDE folks have really been hitting it out of the park the last few years with their Plasma desktop releases, as long as you don't need any accessibility features; GNOME 3 is still the only desktop doing serious work on that front. The integrated KDE applications are a different story and tend to vary in quality. Gwenview has been great and Okular is the best PDF application out there today IMO. Konsole has become my favorite terminal emulator as well. KMail, however, is an unusable train wreck. I imagine the resources put into it aren't the greatest, as most Linux users are probably going to use Thunderbird, mutt, alpine or a webmail interface.

On the distribution side of things I've mostly stuck to Fedora and Debian. I've been a long-time Debian user as it covers a lot of use cases very well, is extremely stable, and I'm rather fond of their governance structure. Plus it's dead simple to move from one release to the next. Fedora has made some significant strides in this direction lately, and I like their "cutting edge adjacent" strategy in terms of software versions. The last time I used Fedora for anything was the Fedora Core 4 through roughly Fedora 8 days. After that it went through some shaky periods in terms of stability and usability, but nowadays I'd say it's easier to get up and going with than Ubuntu. Fedora's KDE spin is quite nice as well, despite Fedora being known as "the GNOME distro." My go-to machines for photo and video work are all running Fedora right now, I have a couple of desktops and laptops on Debian, and all of my infrastructure runs Debian too. IMO you can't really go wrong with either, and if you want to get started with Linux I'd try Fedora, especially as a first-time Linux user. I'm not a fan of derivative distributions and I've always found some of Ubuntu's choices to be strange; just skip those and go straight for Debian Stable IMO.
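That Debian release move really is just retargeting the apt sources at the next codename and upgrading. A minimal sketch, run here against a throwaway copy in /tmp rather than the real /etc/apt/sources.list, with stretch-to-buster as the example jump:

```shell
# Work on a sample sources.list so this sketch is safe to run; on a
# real system you'd edit /etc/apt/sources.list itself.
echo 'deb http://deb.debian.org/debian stretch main' > /tmp/sources.list

# Swap the old release codename for the new one across the file.
sed -i 's/stretch/buster/g' /tmp/sources.list
cat /tmp/sources.list

# Then, on the real system:
#   sudo apt update && sudo apt full-upgrade
```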

Darktable 2.6

On the image workflow side, rather than the day-to-day bread-and-butter desktop stuff, I'm continually impressed with Darktable. I don't miss Lightroom in the slightest. I started moving over to Fuji about the same time I moved to Darktable and it has handled the RAW files nicely. From what I understand, until recently Lightroom struggled with the RAF format and making effective use of X-Trans; Iridient Developer seems to be more popular with Fuji users on Mac and Windows, but I find Darktable's RAF conversion to be quite fantastic, and others seem to agree with my sentiment. Even if you're stuck using a Mac or Windows machine, I'd suggest trying out Darktable for Fuji RAW conversion.

For the rest of the digital asset management workflow Darktable has been more than adequate. It does have more of a learning curve than Lightroom, but it also allows for more under-the-hood exploration with modules like the equalizer. I tend to be pretty self-reliant on the file organization front, though, and have heard others coming from the likes of Aperture and iPhoto say Darktable doesn't do enough there. The developers have stated time and again that they aren't writing a file manager and that there are better solutions out there for it. Personally I think it's fine, but if you're used to just dumping your files into a library management program and letting it handle the files on disk, it will be an adjustment. Honestly I don't like that approach, as it locks your organizational structure into that piece of software; I'd rather just have the files available to me in a directory structure to move about as I see fit.
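If you want to kick the tires on Darktable's RAF handling without committing to the full application, a headless conversion works too. A sketch using darktable-cli, which ships with darktable (the file name is a placeholder, and the whole thing is guarded so it's a no-op when darktable or the sample file isn't present):

```shell
# Convert a Fuji RAF straight to JPEG from the command line;
# --width caps the exported image's width in pixels.
if command -v darktable-cli >/dev/null 2>&1 && [ -f DSCF0001.RAF ]; then
    darktable-cli DSCF0001.RAF DSCF0001.jpg --width 2048
fi
```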

The GIMP has changed some since I first moved over, but in general it's slower moving than most software development cycles these days. There seem to be two camps of GIMP users: it's adequate, or it's a piece of junk. Where you land largely depends on what you're trying to do, and in my opinion photographers are better served by the GIMP than designers. Most people I see complaining about it lacking features are designers and print production types. Which is fair: GIMP does lack CMYK mode among other things, and there is an adjustment to be made coming from Adobe land. For my needs it still gets the job done, especially now that 16-bit and 32-bit images are supported in 2.10. Before then I was running the unstable testing branch, as 2.8 only supported 8-bit images. If you're only working with JPEGs this isn't a big deal; it only matters when you're working with TIFFs and RAW file derivatives. If all you need is minor retouching there's no reason to use Photoshop, unless you need a specific feature or work with others in an Adobe-centric environment. But for most of us just working by ourselves out here, GIMP gets the job done.


Video editing is something that has changed massively since 2015. When I first moved my media production efforts to Linux there wasn't a good option for a non-linear editor, so I dual booted and used Sony Vegas or just got out the MacBook Pro for iMovie, Final Cut or QuickTime. That is no longer the case. Kdenlive has made massive strides and recently did a huge refactoring release that squashed a lot of long-standing odd behaviors, but I've been using it since 2017 without many complaints. If you're used to old-school iMovie and Final Cut, then Kdenlive is pretty easy to move around in. I've heard some folks say it's similar to Adobe Premiere as well.

I am still glad I made the complete switchover. Even before this I was fine with Linux as a desktop operating system, but now my comfort zone has moved with it and I'm no longer at the whims of a couple of rather controlling companies for the creative part of my life. I really don't like feeding anti-competitive, monopolistic, end-user-hostile companies trying to squeeze blood from a stone with monthly subscriptions or engineered-to-fail, overly delicate status symbols.

I’ve also come to despise the term “industry standard,” as it seems to just be a good excuse not to move outside the box you were taught to stay in. “You get what you pay for” is another terrible motto, and it seems to get used by these corporate types to put down libre software alternatives on a regular basis. But I’m off on a tangent at this point; I’d suggest trying some of these tools. There are even more applications I haven’t covered here, like Krita and RawTherapee, both of which I’ve been using lately as well. It really is about the best time it’s ever been to jettison the likes of Adobe, Apple, Google or Microsoft, depending on what exactly your needs are and which products you wish to avoid. For most of us working in creative photo and video fields by ourselves or for ourselves, it’s really just about overcoming the inertia that Adobe has in this space.

Even professionally it's more than possible, but like switching camera systems there is an initial time cost and you have to decide if the benefits are worth it. In my opinion the freedom is worth the trouble and my work hasn't suffered for it. Just don't let others saying "oh, that's not for anyone doing serious work" discourage you. Being on a corporate leash isn't the only way to get things done.
