Mar 1, 2014 - Port knocking with single packet authorization

UPDATE 2016-02-03: Updated the firewall rules section.

A few weeks ago I discovered fwknop which is a very clever mechanism to secure services. I’m using this so I can ssh into a Linux server on my home network without opening the sshd port up to the world.

Single packet authorization works by sending a single, encrypted UDP packet to a remote system. The packet is never acknowledged or replied to, but if the remote system validates it, it uses iptables to temporarily open up the service port for access (the rule is limited to the client’s IP address). If the packet isn’t valid, it is simply ignored. In either case, to an external observer the packet appears to go into a black hole. After a user-configurable amount of time (30 seconds by default), the service port is closed again, but stateful iptables rules keep existing connections active.
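To make that concrete, the temporary rule fwknopd inserts looks roughly like this (a sketch, not fwknopd’s exact output - 203.0.113.10 stands in for the client’s current IP, 54321 is the ssh port used later in this post, and fwknopd also tags the rule internally so it can remove it when the timeout expires):

# Roughly what fwknopd adds to its FWKNOP_INPUT chain after validating an SPA packet
iptables -A FWKNOP_INPUT -s 203.0.113.10 -p tcp --dport 54321 -j ACCEPT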

This is really great because all ports on my home IP address appear, from the internet, to be black holes - my router firewall drops all incoming packets, and the specific ports forwarded for fwknop and ssh are dropped via iptables on my Linux server.

Configuring this solution isn’t too difficult if you are familiar with Linux networking and system administration, but it can be a bit tricky to test.

Server Configuration

There are four areas that need to be configured on the server-side:

  • Fwknop needs to be configured with appropriate ports and security keys
  • iptables policy needs to be created for each service port
  • Services need to listen on appropriate ports
  • Router firewall needs to forward fwknop and service ports to the server

My per-service iptables policies are done via iptables-restore (and ip6tables-restore) and the relevant bits look like:

*filter
-A INPUT -j FWKNOP_INPUT
-A INPUT -i eth0 -p tcp -m tcp --dport 54321 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 54321 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 12345 -j DROP
COMMIT

Note the order of rules is important. Make sure that the FWKNOP_INPUT rule comes before the port-specific rules. Likewise, make sure the ESTABLISHED,RELATED rule for each service comes before the DROP rule for that service port. The last rule is subtle: fwknopd does not bind to the SPA socket port - it transparently sniffs for UDP traffic - hence the DROP rule, in keeping with the rest of my firewall rules that blackhole all inbound traffic.

Before you start fwknopd and open up ports on your router firewall, don’t forget to make sure these rules are in place. If you create these rules manually at that point, use rule position “1” instead of “2”, as you are creating them before fwknopd has added its own rule.
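For reference, creating the same per-service rules by hand looks roughly like this (a sketch using the placeholder interface and ports from above):

# Before fwknopd is running (no FWKNOP_INPUT jump rule yet), the rules go at the top:
iptables -I INPUT 1 -i eth0 -p tcp --dport 54321 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -I INPUT 2 -i eth0 -p tcp --dport 54321 -j DROP
iptables -I INPUT 3 -i eth0 -p udp --dport 12345 -j DROP
# Once fwknopd is running, its FWKNOP_INPUT jump rule normally sits at position 1,
# so the same rules would start at position 2 instead.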

/etc/fwknop/fwknopd.conf excerpt from the server:

PCAP_FILTER                 udp port 12345;

On my Debian testing/Jessie server I also had to add this line to fwknopd.conf:

PCAP_DISPATCH_COUNT            1;

/etc/fwknop/access.conf excerpt from the server:

SOURCE                    ANY
REQUIRE_SOURCE_ADDRESS    Y
KEY_BASE64                SOME_BASE64_ENCODED_KEY
HMAC_KEY_BASE64           SOME_BASE64_ENCODED_HMAC_KEY

I didn’t use the default fwknop port as an added measure, and I’m also running sshd on a non-standard port for a small added bit of security.

Adding another port to sshd is really simple: just add another Port line to sshd_config and restart sshd:

Port 22
Port 54321

Only port 54321 is port forwarded on the router, but I can still use port 22 while on my home network.

Client Configuration

On the client, I have a simple script that:

  • Sends the authorization packet via the fwknop client
  • ssh’s into my server on the configured sshd port

The script looks something like:

fwknop my.host.fqdn
ssh -p 54321 my.host.fqdn
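Since ssh can occasionally race the rule insertion, a slightly more defensive (and hypothetical) version of the script might look like this - the -n flag selects the named stanza from .fwknoprc shown below, and the host and port are the same placeholders:

#!/bin/sh
# Send the SPA packet, give fwknopd a moment to insert the ACCEPT rule,
# then ssh in on the non-standard sshd port.
fwknop -n my.host.fqdn && sleep 1 && exec ssh -p 54321 my.host.fqdn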

Excerpt from .fwknoprc on the client:

[my.host.fqdn]
ACCESS                      tcp/54321
SPA_SERVER                  my.host.fqdn
SPA_SERVER_PORT             12345
KEY_BASE64                  SOME_BASE64_ENCODED_KEY
HMAC_KEY_BASE64             SOME_BASE64_ENCODED_HMAC_KEY
USE_HMAC                    Y
ALLOW_IP                    resolve

From this config, you can see that the fwknop port is 12345, and sshd is listening on 54321 (though these aren’t the real ports or FQDN in use). The KEY_BASE64 and HMAC_KEY_BASE64 values need to match between client and server. I chose to use symmetric keys but you can use asymmetric keys via GPG if you prefer.
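If you go the symmetric route, recent fwknop clients can generate suitable random keys for you (a sketch - check your version’s options; paste the output into access.conf on the server and .fwknoprc on the client):

# Generate base64-encoded Rijndael and HMAC keys
fwknop --key-gen --use-hmac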

See the fwknop documentation for more information on configuring everything. There are a lot of options, so you’ll have to figure out what to do based on your individual needs.

I’m using a free dynamic DNS service so that I don’t have to remember the dynamic IP address assigned by my ISP.

This has been tested with Debian on both stable/Wheezy and testing/Jessie. For Wheezy, make sure you are using the latest version from wheezy-backports.
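For completeness, Debian splits fwknop into server and client packages; installation looks roughly like this (package names as I recall them - double-check against your release):

# On the server (Wheezy needs the backports repository enabled):
sudo apt-get -t wheezy-backports install fwknop-server
# On each client:
sudo apt-get install fwknop-client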

Further Reading

The documentation is decent, and I’ve found this solution works very nicely for me, without exposing any detectable open ports on my network. Unlike with simple port knocking, it is virtually impossible for someone to replay a captured packet to gain access to the system. Because all packets are dropped unless the authorization packet opens up the service port, it is completely undetectable via port scanning that fwknop is even in use.

Give it a try!

Feb 22, 2014 - Blog Migrated to Github

This blog is now being hosted on github pages using octopress.

A large part of the reason for this is diversification away from Google. As much as I love Google for many reasons (and will continue to be a faithful Android user for the near future), I am moving most of my stuff off Google infrastructure.

I’ll write more about the reasons for this later, as well as mention which services I’ve migrated to (email, search, browser, etc).

Please bear with me as I finish migrating all the content and fix up some formatting, images, and such here and there…

Update: Please see newer posts as I have abandoned Octopress for straight Jekyll. Octopress, as it turns out, makes things harder, not simpler.

Jan 12, 2014 - i3 - A Tiling Window Manager for Mortals

After kicking around a few different desktop environments and window managers, I've settled in with i3 as my window manager of choice - and no desktop environment at all. This is by far the most productive user interface I've used and is now in residence on my home laptop, work laptop, work desktop, and shiny new Intel-based Chromebook (as well as its predecessor - an ARM-based Chromebook). I'm still using the Debian distribution of Linux (mostly Wheezy, but also one Jessie/Testing system), which has been fantastic - I can't recommend it enough.



You can read my previous post about the few weeks that I used a BlueTile-derived xmonad configuration on top of the XFCE desktop environment. While this worked pretty well most of the time, and was fairly productive, xmonad is a pain because it requires writing Haskell code to change the configuration. I found this to be a bit burdensome over time - when I just want to tweak a setting I'd much rather just tweak the setting, and not have to debug code.

i3 is just about as flexible as xmonad, but everything is in a regular configuration file, so you don't have to essentially write your own window manager in Haskell to get the configuration you want.

It's easy to plug in different implementations in i3 for cases where the built-in functionality isn't sufficient - for example, I found the default i3 status bar to be a bit limiting for the configuration I wanted, so I handle that via conky, outputting JSON (i.e. not using conky's X integration at all) to i3bar. System notifications come via dunst. dmenu is used as a launcher. Everything plays nicely together, and configuration is a snap.
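To give an idea of how the conky wiring works: i3bar reads a small JSON protocol on stdin, so status_command can point at a tiny wrapper script that prints the protocol header and then lets conky stream the JSON blocks (a minimal sketch, assuming conky is configured for console output; the config path is hypothetical):

#!/bin/sh
# Print the i3bar JSON protocol header and open the endless array,
# then hand off to conky, which emits one array of status blocks per interval.
echo '{"version":1}'
echo '['
echo '[],'
exec conky -c ~/.conky-i3bar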

Not running a regular desktop environment has not in any way been an issue. I can still use any application (for example, see "gnome-screenshot" used in the screenshot above, although more recently I've switched to "scrot"). I don't use graphical file managers as a general rule, and while I could probably install nautilus or thunar, I've found rox-filer works just as well and doesn't require many dependencies. Debian already includes the necessary wiring such that installing i3-wm sets up a lightdm session.

Suspend, shutdown, reboot, and logout are handled via simple keybindings in i3 or from a terminal - I have no problem typing "sudo shutdown -h now", and I can type it just as fast as navigating to some menu.

I found that I was comfortable and productive in i3 within just a few days - you definitely have to make a commitment to learn the keybindings and modes, and understand the container model, but once you do, it's amazing how quickly you can navigate applications, workspaces, and desktops without ever having to take your hands off the keyboard. Learning how to effectively use workspaces for your workflow is super important - i3 allows several different layouts, and each workspace can have its own. Switching between layouts is a snap, and I find myself switching, for example, from a tiling layout to a tabbed layout to get a larger window. i3 remembers layouts, so switching back to tiling puts everything back how it was before. Very nice.

Anyone who's ever seen my desktop knows that I like to have a lot of terminal windows open, very specifically placed. In a traditional window manager doing this is painful - either I open a bunch of terminals, and manually move them around and resize them (I absolutely hate doing this), or write a script that starts all the terminals with the right geometry (also a painful operation, working out the geometry of each window). With i3, and a tiling layout, you never worry about window geometry or location - which is awesome. If I want 4 equally sized terminals on a workspace (with a horizontal tiling default layout), I use the following keystrokes - Super-Enter, Super-Enter, Super-v, Super-Enter, Super-Left, Super-v, Super-Enter. Once you learn the keybindings and container model, this sort of sequence becomes second nature and takes just a few seconds.

This configuration is really great and runs fast, with low resource usage - very important when running on a Chromebook (my install only requires around 2GB of disk) and gives more resources to things such as Java VMs when running on my fat work desktop.

I've decided not to dump my configuration in this post - code in Blogger doesn't work all that well (see my previous post) and the configuration gets stale when I don't bother to update it (also see my previous post, which does not reflect my final configuration). Although it won't help with the latter, you can find my configs, or at least some version of them, on github.

The i3 website is here - http://i3wm.org/

Oct 29, 2013 - My BlueTile based xmonad configuration

UPDATE: I've switched to the i3 window manager as of a couple of months back. i3 is really great - I recall someone saying something along the lines of "xmonad is not a window manager - it's a library for people who want to write their own". This is very true, and I don't miss hacking around in Haskell since switching away from xmonad. Additionally, i3 doesn't require working around deficiencies in xmonad, such as spoofing the window manager name to make Java applications work.

Screenshot

For the past couple of months I've been playing with several different Linux distributions, desktop environments, and window managers. I can back up, reinstall, and restore to a fully working state - even when changing distributions - within two or three hours, so the barrier to entry is fairly low for me. I do limit myself to the world of Debian-based systems, since apt is great and familiar, and there are lots of good reasons to live somewhere in the Debian ecosystem.

For several years I used Kubuntu, until KDE4 came out. KDE4 was released way before it was ready; I slogged along with it for about 6 months until I finally gave up and switched to Xubuntu with XFCE. I really like XFCE for the most part - it's simple and fast, but sometimes lacking in features and feels a bit old. When Unity came out, I gave it a try. A very short one. Unity is an unusable disaster. At this point I decided to abandon Ubuntu and give LinuxMint a try, which I did, moving back to XFCE. I switched to Cinnamon when it came out, and I have to say I really like Cinnamon in general - it's based on Gnome Shell so it's up-to-date, but it looks and works like Gnome2, making it super accessible and usable.

Although I think LinuxMint has some good points, it's based on Ubuntu and there's very little reason to use it over Ubuntu (if that's what you want to use), especially now that Cinnamon is available for several different distributions, including Ubuntu as a PPA. So, I went back to Xubuntu with Cinnamon.

But, I'm really not very happy with Ubuntu. Go take a look at ubuntu.com - you won't find the word "linux" anywhere on that page. That's not acceptable. Ubuntu is a Linux distribution and they should advertise that fact. Ubuntu has shown poor direction in other ways as well, for example the horrid Unity interface and writing their own display system, Mir, to replace Xorg.

Ubuntu is a Debian distribution, so why not just install Debian? I probably should have thought about this a long time ago!

I'm now running Debian testing (Jessie) on 4 different computers - 3 are amd64 systems, and 1 (the one I'm typing this on) is an ARM-based Chromebook.

Debian was just as easy as LinuxMint or Ubuntu for installation - and supports LUKS so I can run with full disk encryption on my laptops (though not the Chromebook, unfortunately). LUKS/dm-crypt is the only way to go - encfs and ecryptfs are horrible hacks, in general. And all the packages I want are available in base Debian.

Jessie, even though it's "testing", does still lag a bit behind other distributions in some ways, but I've found it recent enough for everything I do - and not as risky as Debian unstable, Sid. I wasn't interested in Debian stable, as it's just too far behind for me.

OK, so after installing Debian I went with XFCE, which is a nice choice because all my hardware supports it, so I can use the same configuration and setup everywhere, even the Chromebook. XFCE is also familiar to me, having used it for a couple of years, generally happily.

For fun, I decided to give Gnome Shell a try. Many people have been pretty negative about Gnome Shell, but I actually found it to be pretty nice and usable, after I installed several extensions to get some better usability. With Gnome Shell you have to think a little bit differently about window management and make good use of workspaces to organize things. One thing I didn't like about Gnome Shell was how much I had to move the mouse to do things.

Still, I managed to get a pretty usable workflow out of Gnome Shell, and played with some nice tiling extensions - shellshape and shelltile - which sort of worked. However, shellshape doesn't support multiple monitors (which I like on my work system), and shelltile was too mouse-driven, though a nice idea. If you are interested in tiling window managers on Gnome Shell, and only have one monitor, give shellshape a try - it's pretty nice, with some shortcomings.

One issue with Gnome Shell is that it doesn't work on my Chromebook, since it doesn't have accelerated video (currently, until I get armsoc running on it). The fallback mode is usable, much like Gnome 2, but then I also couldn't run the nifty Gnome Shell extensions I discovered.

I've always been intrigued by tiling window managers, which automatically arrange windows such that nothing overlaps, and which typically have very simple interfaces that maximize screen real estate (for example, by not having title bars on windows). They also tend to be driven by keyboard, minimizing mouse usage.

I've tried xmonad before, but was scared away by having to basically write a configuration in haskell. I tried a few other tiling window managers as well, but never found any that I really liked or felt like I wanted to invest the time in them.

This takes me back to shellshape - which uses the same keybindings as BlueTile, an xmonad-based tiling window manager. I liked the key bindings in shellshape, so I thought maybe that would make BlueTile accessible. I was also curious whether I could run xmonad with XFCE so I could have my usual panels and a nice menu (the menu plugins for xmonad are... primitive at best).

So how did my experience go? Well, here I am typing this on my Chromebook, running XFCE and xmonad with a BlueTile based configuration. I used this same configuration on my work laptop and desktop all day today as well, and found it very productive and fast, but I sure have a lot of new keybindings to remember!

Below is my .xmonad/xmonad.hs file. In order to use this in Debian, all I needed to do was install the xmonad package, as BlueTile is already included with the base xmonad package! Note that there is a separate bluetile package - you can install that instead if you like, but then you won't be able to apply my customized settings below.
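On Debian that amounts to something like this (a sketch; on my systems the xmonad package was enough, but if the contrib modules used below can't be found, libghc-xmonad-contrib-dev provides them):

sudo apt-get install xmonad
# Only if XMonad.Config.Bluetile and the other contrib imports fail to resolve:
sudo apt-get install libghc-xmonad-contrib-dev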

It took me many hours to get this configuration working and looking the way I wanted. I'm not completely happy with my approach to eliminating the window title bars - it works, but it's a bit hackish and relies on having certain colors configured in XFCE (you may need to change "#cecece" below).

--
-- My BlueTile Configuration
--
-- BlueTile is a great place to start using xmonad, but I wanted to customize a number of things. I didn't feel like writing
-- my own xmonad implementation from scratch, so I use the xmonad-contrib BlueTile configuration with a few modifications to
-- make it work the way I want to. I'm using this inside an XFCE session, which provides my panels.
--
-- I'm new to xmonad and haskell, so this is a hack at best, but it gives me the look and behavior I want - and was a great
-- way to ease from BlueTile into a more custom xmonad configuration.
--
-- My blog: https://scotte.org
--
-- Differences from a vanilla BlueTile config:
--   * No titlebar (there's probably a better way to do this)
--   * focusFollowsMouse is enabled
--   * pointer follows focused window (middle of window)
--   * WM is spoofed to LG3D so Java apps work
--   * terminal is set to xfce4-terminal
--   * focusedBorderColor is red
--   * borderWidth is 2
--
-- Adapted from BlueTile (c) 2009 Jan Vornberger http://bluetile.org
--

import XMonad hiding ( (|||) )

import XMonad.Layout.BorderResize
import XMonad.Layout.BoringWindows
import XMonad.Layout.ButtonDecoration
import XMonad.Layout.Decoration
import XMonad.Layout.DecorationAddons
import XMonad.Layout.DraggingVisualizer
import XMonad.Layout.LayoutCombinators
import XMonad.Layout.Maximize
import XMonad.Layout.Minimize
import XMonad.Layout.MouseResizableTile
import XMonad.Layout.Named
import XMonad.Layout.NoBorders
import XMonad.Layout.PositionStoreFloat
import XMonad.Layout.WindowSwitcherDecoration

import XMonad.Hooks.CurrentWorkspaceOnTop
import XMonad.Hooks.EwmhDesktops
import XMonad.Hooks.ManageDocks
import XMonad.Hooks.SetWMName

import XMonad.Actions.UpdatePointer

import XMonad.Config.Bluetile

import XMonad.Util.Replace

myTheme = defaultThemeWithButtons {
    activeColor = "red",
    activeTextColor = "red",
    activeBorderColor = "red",
    inactiveColor = "#cecece",
    inactiveTextColor = "#cecece",
    inactiveBorderColor = "#cecece",
    decoWidth = 1,
    decoHeight = 1
}

myLayoutHook = avoidStruts $ minimize $ boringWindows $ (
                        named "Floating" floating |||
                        named "Tiled1" tiled1 |||
                        named "Tiled2" tiled2 |||
                        named "Fullscreen" fullscreen
                        )
        where
            floating = floatingDeco $ maximize $ borderResize $ positionStoreFloat
            tiled1 = tilingDeco $ maximize $ mouseResizableTileMirrored
            tiled2 = tilingDeco $ maximize $ mouseResizableTile
            fullscreen = tilingDeco $ maximize $ smartBorders Full

            tilingDeco l = windowSwitcherDecorationWithButtons shrinkText myTheme (draggingVisualizer l)
            floatingDeco l = buttonDeco shrinkText myTheme l

main = replace >> xmonad bluetileConfig {
    layoutHook = myLayoutHook,
    logHook = currentWorkspaceOnTop >> ewmhDesktopsLogHook >> updatePointer (Relative 0.5 0.5),
    focusFollowsMouse = True,
    borderWidth = 2,
    focusedBorderColor = "red",
    terminal = "xfce4-terminal",
    startupHook = setWMName "LG3D"
}
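After changing ~/.xmonad/xmonad.hs, xmonad can be rebuilt and restarted in place without logging out:

xmonad --recompile && xmonad --restart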

Aug 19, 2013 - Are your financial institutions' websites developed with agile practices?

If so, you are lucky - because mine sure aren't. Seems like every bank or other financial institution that I do business with is about a decade or so behind in web technology. They have very, very long and infrequent software development cycles, don't support recent (much less latest) client technologies, and have "major new feature releases" that are pretty darn uninspiring.

I can understand the importance of moving slow when it comes to people's finances, but it seems they don't really have a distinction between basic usability and those things that could result in serious financial exposure. In these days of distributed systems there's no excuse to muddle UIs with the backend.

I have three quick stories from the last year or so across two different financial institutions. They will remain nameless, although I'm in the process of closing my accounts at one of them (the subject of the first two stories), and at the institution in the third story I have a great personal relationship with individual people - even though their technology is pretty bad, the institution itself is excellent.

Story #1 - Please Downgrade Your Browser

So I've been using the institution's website several times a week with no problem, then suddenly it stops working from Chrome - I was unable to log in; it would just redirect me back to the login page. Maybe Chrome updated, I'm not really sure (my Linux distribution handles updates so well I seldom pay attention to what gets updated). So, I try Firefox and it works fine - I am able to log in to my account - so I fire off an email to customer support to let them know Chrome 17 (this is May 2012) doesn't work with their site. Following is the response:

I apologize that you have encountered this difficulty. Unfortunately, higher versions of chrome are unsupported with our website. When a browser version is unsupported the functionality will be intermittent at best. Sometimes it will work fine for months, and then one day stop working all together.

In order to continue using Chrome without issue, please use one of the following supported versions:

· Chrome: Versions 11 or 12

Once again, I do apologize for any inconvenience this may have caused. We are continually working on updating our supported browsers, but at this time those are the only truly supported versions.

Realize that the institution's website doesn't do anything complex - absolutely nothing about an update from Chrome 16 to 17 should cause an inability to log in. It's only because of non-W3C-compliant practices that this would happen in the first place, but it's absolute madness to suggest I downgrade 5 or 6 versions of Chrome! I'm not even sure how I would do that - does Google even have archives where you can download old versions?

At least they didn't tell me to use IE. That's the response I've gotten from a number of customer support interactions for various things over the years, even after telling them I'm not on Windows.

I don't remember how long it was before I could use the site again in Chrome, but it was at least several months, and I received no notification or follow-up, I just tried one day and it worked.

How would a more modern site deal with this? First of all, they would be testing not only the latest stable versions of browsers, but the bleeding edge, and be prepared. They would also be agile with the ability to test and push new releases daily, not monthly, to resolve fundamental usability issues. Good customer support would also dictate closing the loop with the customer, rather than leaving them hanging.

Story #2 - We Will Resolve This In 48 Hours

My paycheck was deposited into one institution, then partially transferred to another institution (the one in story #1) via an inter-bank ACH transfer initiated by the second institution. This had been working fine for a couple of years, then one day my transfer doesn't go through. Did they notify me of the failure? Nope. Did the account transfer page show anything interesting? Nope - unlike past transfers, which left a history, this one simply vanished, like it never happened.

So, being the reasonable person that I am, I called customer service and opened a ticket with them. They said they would look into it and that it should be resolved in 48 hours. One month later they resolved the issue. You read that right: it took a month for their 48-hour fix. For the first 3 weeks I called them several times a week for a status, and their answer was always 48 hours - even after we had been in this situation for 3 weeks. All they would tell me was that they were "investigating the issue" and that it would be resolved soon. Not knowing what was happening, and assuming that they would get the transfer through "real soon now", we ended up going low on funds at that institution and were basically unable to use our account. If I had known it would be weeks, I would have deposited via other means, but when I'm told it's only a couple of days because they are nearly done with the fix, I guess I was overly optimistic. To make matters worse, when I raised concerns about automatic, scheduled payments, they told me there wouldn't be a problem because the issue was on their end - fool me once, shame on you... It was a problem, because those payments still went through even though they told me all was fine. As it turns out, they finally admitted it was a software bug in their ACH system, but for this institution it was too late, as we had already taken our banking elsewhere.

How would a more modern company deal with this? This isn't strictly a site issue, but it was a back-end software problem, and repeatedly telling me it would be resolved in 2 days when in fact it would take 30 did nothing to help. I'm sure they didn't actually know, but better to give the customer a pessimistic answer than an unreasonably optimistic one. I arranged my finances based on what they told me and would have reacted completely differently otherwise. Additionally, I had to keep calling them for status, as they would never call me back after the promised 48 hours. They finally did call me back when it resolved, but by then it didn't matter anymore.

Story #3 -  We Are Experiencing Known Performance Issues

On a Saturday morning I logged into my account at this institution to check some activity. At least, that's what I was trying to do - the institution's landing page didn't mention a thing, and it was only after logging in that they displayed a page saying the site was down for maintenance for the whole weekend.

Really? Down for the whole weekend for an upgrade? This is 2013! And with zero advance notification that the site would be down (they have my email address). And I had to log in to even find out.

The last couple of companies I've worked at have prided themselves on the ability to do live rolling upgrades with no site outage. This is not a hard thing to do these days with a good architecture. Being down for 2 days would mean going out of business for a lot of places, but for a financial institution it's considered normal, I guess.

OK great, so now it's Monday morning and I try to login again. Right, so the page times out. I reload, and after several minutes, I finally get a page. Clicking on anything results in a similar pattern - either a timeout or page load time of several minutes. I tried to use the online feedback form, but that timed out too.

A call to customer service resulted in a long wait time with a message that they were aware of site performance issues due to huge demand for the upgraded site. I guess that's one way to put it, but clearly they did insufficient testing and weren't ready for Monday morning. There was no reason for me to waste support's time at that point, so I hung up and tried to login again later.

How great was the new site? It sucked just as badly as the old one did. It's really pretty bad.

How would a more agile company deal with this? They would test things in advance, do rolling updates rather than having a site outage for 2+ days, and route just a portion of traffic to the new site until being comfortable that all is well prior to a full cut-over.

What's the lesson from all of this? Financial institution IT is well behind the times and could learn a lot from agile companies that release continually. Again, I understand the financial risk, but a good architecture would insulate those details from the website and its usability for customers.