Matt's Musings

July 12, 2014

GPG Key Management Rant

Filed under: Debian,Linux,WLUG / LinuxNZ — Administrator @ 12:17 pm NZST

2014 and it’s still annoyingly hard to find a reasonable GPG key management system for personal use… All I want is to keep the key material isolated from any Internet connected host, without requiring me to jump through major inconvenience every time I want to use the key.

An HSM/Smartcard of some sort is an obvious choice, but they all suck in their own ways:
* FSFE smartcard – it's a smartcard, so it requires a reader, and readers are generally not particularly portable compared to a USB stick.
* Yubikey Neo – restricted to 2048-bit keys, and doesn't allow import of primary keys (only subkeys), so you either generate on-device and have no backup, or maintain an off-device primary key with only subkeys on the Neo, negating much of the benefit of the device in the first place.
* Smartcard HSM – similar problems to the Neo, plus poor GPG support (needs 2.0 with specific supporting module version requirements).
* Cryptostick – made by some Germans, sounds potentially great, but perpetually out of stock.

Which leaves basically only the “roll your own” dm-crypt+LUKS usb stick approach. It obviously works well, and is what I currently use, but it’s a bunch of effort to maintain, particularly if you decide, as I have, that the master key material can never touch a machine with a network connection. The implication is that you now need to keep an airgapped machine around, and maintain a set of subkeys that are OK for use on network connected machines to avoid going mad playing sneakernet for every package upload.
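
For reference, the subkey dance looks roughly like this (the key ID is a placeholder, and this is a sketch of the idea rather than my exact procedure):

# gpg --export-secret-subkeys --armor 0xDEADBEEF > subkeys.asc
# gpg --import subkeys.asc

The export runs on the airgapped machine with the LUKS stick mounted as the GPG home; the import runs on the networked machine, which ends up with usable signing/encryption subkeys and only a stub where the primary secret key should be.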

The ideal device would be a USB form factor, supporting import of 4096 bit keys, across all GPG capabilities, but with all crypto ops happening on-device, so the key material never leaves the stick once imported. Ideally also cheap enough (e.g. ~100ish currency units) that I can acquire two for redundancy.

As far as I can tell, such a device does not exist on this planet. It’s almost enough to make a man give up on Debian and go live a life of peace and solitude with the remaining 99.9% of the world who don’t know or care about this overly complicated mess of encryption we’ve wrought for ourselves.

end rant.

March 17, 2012

Kindle Reading Stats

Filed under: General — @ 11:08 am NZST

I’ve written before about my initial investigations into the Kindle, and I’ve learnt much more about the software and how it communicates with the Amazon servers since then, but it all requires detailed technical explanation which I can never seem to find the motivation to write down. Extracting reading data out of the system log files is however comparatively simple.

I’m a big fan of measurement and data so my motivation and goal for the Kindle log files was to see if I could extract some useful information about my Kindle use and reading patterns. In particular, I’m interested in tracking my pace of reading, and how much time I spend reading over time.

You’ll recall from the previous post that the Kindle keeps a fairly detailed syslog containing many events, including power state changes, and changes in the “Booklet” software system including opening and closing books and position information. You can eyeball any one of those logfiles and understand what is going on fairly quickly, so the analysis scripts are at the core just a set of regexps to extract the relevant lines and a small bit of logic to link them together and calculate time spent in each state/book.

You can find the scripts on Github: https://github.com/mattbnz/kindle-utils
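
To give a flavour of the approach (a minimal sketch only – the real scripts track much more state), pulling the book events out of the rotated logs boils down to something like:

zcat /var/local/log/messages_*.gz | \
  sed -n 's/^\([0-9]*:[0-9]*\) .*Reader:BOOK INFO:book asin=\([^,]*\),.*last read position=MobiPosition_ \([0-9]*\),.*/\1 \2 \3/p'

which spits out a timestamp, ASIN and reading position per event, ready to be linked together. (The log path and line format here are taken from my earlier Kindle post; your logs may vary.)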

Of course, they're not quite that simple. The Kindle doesn't seem to have a proper hardware clock (or mine has a broken one). My Kindle comes back from every reboot thinking it's either at the epoch or somewhere in the middle of 2010, and the time doesn't get corrected until it can find a network connection and ping an Amazon server for an update, so if you have the network disabled it might be many days or weeks of reading before the system time is updated to reality. Once it has a network connection it uses the MCC reported by the 3G modem to infer what timezone it should be in, and switches the system clock to local time. Unfortunately the log entries all look like this:


110703:193542 cvm[7908]: I TimezoneService:MCCChanged:mcc=310,old=GB,new=US:
110703:193542 cvm[7908]: I TimezoneService:TimeZoneChange:offset=-25200,zone=America/Los_Angeles,country=US:
110703:193542 cvm[7908]: I LipcService:EventArrived:source=com.lab126.wan,name=localTimeOffsetChanged,arg0=-25200,arg1=1309689302:
110703:193542 cvm[7908]: I TimezoneService:LTOChanged:time=1309689302000,lto=-25200000:
110703:183542 system: I wancontrol:pc:processing "pppstart"
110703:193542 cvm[7908]: I LipcService:EventArrived:source=com.lab126.wan,name=dataStateChanged,arg0=2,arg1=:
110703:183542 cvm[7908]: I ConnectionService:LipcEventArrived:source=com.lab126.cmd,name=intfPropertiesChanged,arg0=,arg1=wan:
110703:183542 cvm[7908]: W ConnectionService:UnhandledLipcEvent:event=intfPropertiesChanged:
110703:193542 wifid[2486]: I wmgr:event:handleWpasupNotify(<2>CTRL-EVENT-DISCONNECTED), state=Searching:
110703:113542 wifid[2486]: I spectator:conn-assoc-fail:t=374931.469106, bssid=00:00:00:00:00:00:
110703:113542 wifid[2486]: I sysev:dispatch:code=Conn failed:
110703:183542 cvm[7908]: I LipcService:EventArrived:source=com.lab126.wifid,name=cmConnectionFailed,arg0=Failed to connect to WiFi network,arg1=:

Notice how there is no timezone information associated with the date/time on each line. Worse still, the different daemons are logging in at least 3 different timezones/DST offsets, all interspersed within the same logfile. Argh!!

So our simple script that just extracts a few regexps and links them together nearly doubles in size to handle the various time and date convolutions that the logs present. Really, the world should just use UTC everywhere. Life would be so much simpler.
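
The core of the workaround looks something like this (a gawk sketch of the idea, not the actual code; it assumes an offset of zero until the first LTOChanged event appears, and runs under TZ=UTC so mktime doesn't mix in your own timezone):

TZ=UTC gawk '
  # remember the most recent local time offset (logged in milliseconds)
  match($0, /LTOChanged:.*lto=(-?[0-9]+)/, m) { lto = m[1] / 1000 }
  # rewrite each YYMMDD:HHMMSS timestamp back to UTC
  match($0, /^([0-9][0-9])([0-9][0-9])([0-9][0-9]):([0-9][0-9])([0-9][0-9])([0-9][0-9]) /, t) {
    utc = mktime("20" t[1] " " t[2] " " t[3] " " t[4] " " t[5] " " t[6]) - lto
    print strftime("%Y-%m-%d %H:%M:%SZ", utc, 1), substr($0, 15)
  }' messages

Even that doesn't fully solve the problem of different daemons logging at different offsets within the same file (compare the wancontrol and wifid lines above), which is where most of the remaining bulk of the real script goes.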

The end result is a script that spits out information like:

B000FC1PJI: Quicksilver: Read 1 times. Last Finished: Fri Mar 16 18:30:57 2012
- Tue Feb 21 11:06:24 2012 => Fri Mar 16 18:30:57 2012. Reading time 19 hours, 29 mins (p9 => p914)

...

Read 51 books in total. 9 days, 2 hours, 29 mins of reading time

I haven’t got to the point of actually calculating reading pace yet, but the necessary data is all there and I find the overall reading time stats interesting enough for now.

If you have a jailbroken Kindle, I’d love for you to have a play and let me know what you think. You’ll probably find logs going back at least 2-3 weeks still on your Kindle to start with, and you can use the fetch-logs script to regularly pull them down to more permanent storage if you desire.

November 24, 2011

How I’m voting in 2011

Filed under: General,Life,WLUG / LinuxNZ — @ 11:45 pm NZST

It’s general election time again in New Zealand this year, with the added twist of an additional referendum on whether to keep MMP as our electoral system. If you’re not interested in New Zealand politics, then you should definitely skip the rest of this post.

I've never understood why some people consider their voting choices a matter of national security, so when, via Andrew McMillan, I saw a good rationale for why you should share your opinion, I found my excuse to write this post.

Party Vote
I'll be voting for National. I'm philosophically much closer to National than Labour, particularly on economic and personal responsibility issues, but even if I wasn't, the thought of having Phil Goff as Prime Minister would be enough to put me off voting Labour. His early career seems strong, but lately it's been one misstep and half-truth after another, and the remainder of the Labour caucus and their likely support partners don't offer much reassurance either. If I was left-leaning and the mess that Labour is in wasn't enough to push me over to National this year, then I'd vote Greens and hope they saw the light and decided to partner with National.

Electorate Vote
I live in Dublin, but you stay registered in the last electorate where you resided, which for me is Tamaki. I have no idea who the candidates there are, so I’ll just be voting for the National candidate for the reasons above.

MMP Referendum
I have no real objections to MMP and I think it’s done a good job of increasing representation in our parliament. I like that parties can bring in some star players without them having to spend time in an electorate. I don’t like the tendency towards unstable coalitions that our past MMP results have sometimes provided.

Of the alternatives, STV is the only one that I think should be seriously considered. FPP and its close cousin SM don't give the proportionality of MMP, and PV just seems like a simplified version of STV with limited other benefit. If you're going to do preferential voting, you might as well do it properly and use STV.

So, I’ll vote for a change to STV, not because I’m convinced that MMP is wrong, but because I think it doesn’t hurt for the country to spend a bit more time and energy confirming that we have the right electoral system. If the referendum succeeds and we get another referendum between MMP and something other than STV in 2014, I’ll vote to keep MMP. If we have a vote between MMP and STV in 2014 I’m not yet sure how I’d vote. STV is arguably an excellent system, but I worry that it’s too complex for most voters to understand.

PS. Just found this handy list of 10 positive reasons to vote for National, if you’re still undecided and need a further nudge. Kiwiblog: 10 positive reasons to vote National

June 13, 2011

Using StartCom Free SSL certificates with Cyrus imapd

Filed under: Linux — @ 9:12 am NZST

I stumbled across StartCom a few months ago: an Israeli company that runs a Certificate Authority (CA) called StartSSL, with a root certificate in all the modern browsers and operating systems. Best of all, they don't participate in the cartel run by the rest of the SSL certificate industry and instead offer domain-validated certificates at the price it costs them to issue one – nothing.

I had my first opportunity to use their services today when I needed an SSL cert to secure the IMAP server I run for my parents, and I was very pleased with the experience. The web interface is a bit weird and you have to jump through some strange hoops, but to save paying more money to the SSL certificate cartel it seemed more than worthwhile.

Like most CAs these days the certificate which signs your server certificate is not the actual root certificate included in your operating system or browser, but an intermediate CA certificate which is in turn signed by the root certificate. This means that you have to ensure that your server includes the intermediate CA certificate alongside the server certificate so the client can validate the entire path back to the root.

Unlike Apache, which explicitly allows you to specify a certificate chain file, the openssl methods used by Cyrus 2.2 only seem to recognise a single CA certificate in the file pointed to by tls_ca_file. All is not lost, however, as the openssl libraries are actually quite smart and will automagically determine which intermediate certs they need to bundle into the handshake if you install them appropriately under /etc/ssl/certs (at least on Debian).

The trick is that you have to install the intermediate CA cert into a file named after the hash of the certificate, like so:

# wget http://www.startssl.com/certs/sub.class1.server.ca.pem -O /etc/ssl/certs/startcom-class1-intermediate.pem
# hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/startcom-class1-intermediate.pem)
# ln -s ./startcom-class1-intermediate.pem /etc/ssl/certs/${hash}.0
# ls -l /etc/ssl/certs/${hash}.0
lrwxrwxrwx 1 root root 34 2011-06-13 07:43 /etc/ssl/certs/ea59305e.0 -> ./startcom-class1-intermediate.pem
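
(As an aside, the openssl package also ships a c_rehash script that creates these hash-named symlinks for every certificate in a directory, which saves doing the above by hand if you have more than one intermediate to install.)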

Then in imapd.conf:

tls_cert_file: /etc/ssl/certs/your-server-cert.pem
tls_key_file: /etc/ssl/private/your-server-key.key
tls_ca_file: /etc/ssl/certs/startcom-ca.pem

Voila. Works everywhere I’ve tried so far.

StartCom – Highly Recommended. I'll be using them for any future SSL certificate purchases (e.g. EV certs) that I need to make.

May 12, 2011

Linux ignores IPv6 router advertisements when forwarding is enabled

Filed under: Linux — @ 11:26 am NZST

IPv6 adoption is increasing, and along with it come a new set of behaviours and defaults that system administrators and users must learn and become familiar with. I was recently caught out by Linux’s handling of IPv6 router advertisements (RAs) when forwarding is also enabled on the interface. It took me a while to figure out and searching for obvious terms (such as those in the first half of the title of this post) didn’t immediately yield useful answers, so here is my attempt to help shed some light on the subject.

By default Linux will ignore IPv6 RAs if the interface is configured to forward traffic. This is in line with RFC2462 which states that a device should be either a Host or a Router. If you’re forwarding packets you’re a router and you’re therefore expected to be sending RAs, not receiving them. This policy does make a certain amount of sense but there are obviously situations where it can be useful to accept RAs and still forward packets over the interface[0]. The confusing part is that the Linux IPv6 stack allows the accept_ra sysctl to be set to 1 (enabled) at the same time as the forwarding sysctl is set to 1, yet all incoming RAs are ignored with no hint as to why. If you’re not aware that the default behaviour is to ignore RAs when forwarding is enabled it looks very much like autoconfiguration has simply broken.

The key piece of information that makes everything clear is realising that the forwarding and accept_ra sysctls are not simple boolean enabled/disabled flags like many of their brethren. There are instead three possible values for each, all clearly documented in sysctl.txt, when you take the time to read it. Ironically the documentation states the type of the values as "BOOLEAN" even though they're not… at least that helped me feel better about my hasty assumption that the sysctls were boolean values.

accept_ra – BOOLEAN
Accept Router Advertisements; autoconfigure using them.

Possible values are:
0 Do not accept Router Advertisements.
1 Accept Router Advertisements if forwarding is disabled.
2 Overrule forwarding behaviour. Accept Router Advertisements
even if forwarding is enabled.

Functional default: enabled if local forwarding is disabled.
disabled if local forwarding is enabled.

The documentation for forwarding is similar, but much longer, so you can refer to the link above to see it.

Conclusion: If you want to autoconfigure IPv6 addresses on an interface that you’re also forwarding IPv6 traffic over, you need to set accept_ra to 2.
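
For example, for eth0 (substitute your own interface):

# sysctl -w net.ipv6.conf.eth0.accept_ra=2
# sysctl -w net.ipv6.conf.eth0.forwarding=1

or the equivalent lines in /etc/sysctl.conf if you want it to survive a reboot:

net.ipv6.conf.eth0.accept_ra = 2
net.ipv6.conf.eth0.forwarding = 1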

No doubt there are more IPv6 quirks and defaults like this waiting to trap me in the future :)

[0] Arguably you really don’t want to be autoconfiguring addresses on your router ever, but that’s a philosophical debate that isn’t really relevant to this post.

December 7, 2010

Under the cover of the Kindle 3

Filed under: Linux — @ 12:52 pm NZST

For my birthday back in October, my wonderful wife gave me a Kindle 3 from Amazon. I’d been considering other e-book readers for quite some time, but I had mostly ignored the Kindle due to the lack of EPUB support and a general dislike of Amazon’s DRM enforcement. In the end, the superior hardware and ecosystem of the Kindle overpowered those concerns and overall I’m very pleased with the purchase. The screen is amazing, literally just like reading off a piece of paper and the selection of books is OK. I’ve been buying almost all my books from Amazon to date since it’s so easy (the Whispernet is amazingly quick!) but it’s not terribly difficult to get EPUBs from elsewhere onto the device after a quick run through Calibre to turn them into a MOBI file, so I keep telling myself I’ve still got some flexibility.

Almost as much fun as reading on the device has been learning about how it works. The Mobile Read forums have lots of step-by-step posts on how to do specific tasks like replacing the screensaver image, but they don't give much background detail on how the Kindle is actually operating, which is what really interests me. Luckily, among all the step-by-step posts I also found a "usbnetwork" package which adds an SSH server to the Kindle, so after installing that and SSHing in I've been poking around.

Under the cover the Kindle reveals a fairly standard Linux installation. While the hardware and IO devices are obviously unique, compared to something like an Android phone, the Kindle is refreshingly “normal”.

Hardware

  • Freescale MX35 ARMv6 based CPU with Java specific instruction support.
  • 256MB RAM
  • 4GB of internal flash presented as an SDHC device with four partitions. A ~700MB root partition, a ~30MB /var/local partition, another roughly 30MB kernel partition and then the rest (~3.1G) as the writeable “user” partition where your books and other content are stored. The root and /var/local partitions are ext3! (not jffs or some other more traditional flash based file system) while the user partition is vfat for easy use with Windows, etc.
  • The board is code-named ‘luigi’ and there are lots of references to ‘mario’ and ‘fiona’ scattered around the device, and even in some URLs on Amazon’s website. Someone was obviously a Super Mario fan.
  • The wireless chipset is Atheros based using the ar6000 drivers.
  • The WAN (3G) modem presents itself as a USB serial device and is controlled via a custom daemon named 'wand' which uses the standard Linux pppd package to establish IP connections over a private Amazon APN (provided by Vodafone here in Ireland).
  • The EInk display shows up as some special files under /proc rather than as a device. With a bit of digging I found some simple constants that when written to the proc files cause the screen to display the standard boot/progress/upgrading images. I haven’t deciphered how to make more complex updates to the display yet.

Software

  • The kernel is based on Linux 2.6.26, with a bunch of hardware specific patches and drivers from lab126.com, an Amazon subsidiary who appear to be responsible for much of the low-level driver and device development.
  • Lots of familiar open source projects are present, e.g. syslog-ng, DBus, busybox, pppd, wpa_supplicant, gstreamer, pango, openssl and the list goes on. You can download all the sources from Amazon's website. I haven't spent any time to see what, if anything, has been modified.
  • There were a few unexpected finds as well, such as GDB and powertop! No doubt useful for the developers, but highly unlikely to actually be used on a shipping Kindle.
  • Boot-up is controlled by a set of sysv-style init scripts which set up the filesystems and then start a handful of daemons to look after the low-level subsystems (network, power, sound) as well as the standard syslog and cron daemons you'd expect to see on any Linux box.
  • Once the basic system is up and running the init scripts kick off the “framework” which lives under /opt/amazon/ebook and consists of lots of Java classes. The system uses the cvm Java environment from Sun/Oracle which is specialised for embedded low-memory devices like this. The framework appears to take over most of the co-ordination, management and interaction tasks once it has started up.

The application/framework code is heavily obfuscated, apparently using the Allatori Java Obfuscator. The jrename and jd-gui utilities have proven very handy in helping to untangle the puzzle, although they still only leave you with a pile of Java source code with mostly single-letter variable and class names! I've been using IntelliJ's support for refactoring/renaming Java code to slowly work through it (thanks in large part to error/log messages and string constants found throughout the code, which can't be obfuscated easily and help to explain what is going on), and I'm slowly beginning to piece together how the book reading functionality works. I'll maybe write more on this in a future post.

In one of my initial tweets about the Kindle I mentioned that it seemed to be regularly uploading syslog data to Amazon based on some sendlogs scripts I’d noticed and a few syslog lines containing GPS co-ordinates that had been pasted on the Mobile Read forums. I can’t find any trace of GPS co-ordinates in any syslog messages I’ve seen on my device, but there is definitely information about the cell sites that my Kindle can see, the books that I’m opening and where I’m up to in them:

101206:235431 wand[2515]: I dtp:diag: t=4cfd77b7,MCC MNC=272 01,Channel=10762,Band=WCDMA I IMT 2000,Cell ID=1362209,LAC=3021,RAC=1,Network Time=0000/00/00 00.00.00,Local Time Offset=Not provided,Selection Mode=Automatic,Test Mode=0,Bars=4,Roaming=1,RSSI=-88,Tx Power=6,System Mode=WCDMA,Data Service Mode=HSDPA,Service Status=Service,Reg Status=Success,Call Status=Conversation,MM Attach State=Attach accept,MM LU State=LU update,GMM Attach State=Attach accept,GMM State=Registered,GMM RAU State=Not available,PDP State=Active,Network Mode=CS PS separate attach mode,PMM Mode=Connected,SIM Status=Valid; PIN okay; R3,MM Attach Error=No error,MM LU Error=No error,GMM Attach Error=No error,GMM RAU Error=Not available,PDP Rej Reason=No error,Active/Monitored Sets=0;39;-11 1;180;-15,RSCP=-111,DRX=64,HSDPA Status=Active,HSDPA Indication=HSDPA HSUPA unsupp,Neighbor Cells=,Best 6 Cells=,Pathloss=,MFRM=,EGPRS Indication=,HPLMN=,RPLMN=272;01 ,FPLMN=234;33 234;30 234;20 272;05 ,n=1:

101206:235758 cvm[3426]: I Reader:BOOK INFO:book asin=B003IWZZ3Y,file size=233168,file last mod date=2010-11-27 19.18.22 +0000,content type=ebook,length=MobiPosition_ 465747,access=2010-12-06 09.44.32 +0000,last read position=MobiPosition_ 464387,isEncrypted=false,isSample=false,isNew=false,isTTSMetdataPresent=false,isTTSMetadataAllowed=true,fileExtn=azw:

101206:233416 udhcpc[5639]: Offer from server xxx.xxx.2.254 received
101206:233416 udhcpc[5639]: Sending select for xxx.xxx.2.10...

Interestingly, you can see from the last two lines that Amazon has taken some care to preserve privacy by not including the full IP address given to the device by my local Wifi network, so in light of that I find it interesting that they decided not to obfuscate the Cell and Book IDs in those respective log messages too. Seems rather inconsistent.

As to how and when these logs are sent to Amazon, the picture is a little bit murky. Every 15 minutes tinyrot runs out of cron and rotates /var/log/messages if it is greater than 256k in size. Rotated logs are stored into /var/local/log under filenames like messages_00000044_20101207000006.gz and alongside the log files are a set of state files named nexttosendfile, messages_oldest, messages_youngest. Something regularly sweeps through this directory to update the state and remove the old logs (after sending them up to Amazon I assume). I suspect that something is buried in the Java application code mentioned above.

On the whole the Kindle is a fascinating piece of technology. It delivers a wonderful reading experience on top of a familiar Linux system and is going to provide me with many more hours of entertainment as I unpack all the tricks and techniques that have gone into this device. I would recommend it as a present for geeks everywhere.

March 29, 2010

Initial Review of Xero Personal

Filed under: General — @ 12:10 pm NZST

I’ve been eagerly looking forward to the release of Xero Personal which has been heavily promoted by Xero and BNZ (as MoneyMap) for the last few months. Unfortunately my first impressions of the product today are extremely underwhelming. Xero Personal is definitely not worth anywhere close to $5/month for me at this point in time and I’m unlikely to even keep using the free trial.

To set the context for that statement, Xero Business set the bar high. I first used the original version of Xero while it was still in beta, and even then it was clear that it was an application that took accounting to a new level and would provide an order of magnitude improvement in how I maintained the accounts for our business. That promise held true once we started paying for it: even though the cost of Xero is more than 10% of our annual expenses, the time and hassle it saves makes it a worthwhile investment. By contrast, today's release of Xero Personal offers nothing new above existing personal finance websites or desktop packages, and would take me extra time to use as it fails to handle many of the basic transactions that a normal household will encounter.

The way Xero Personal works is by having you manually upload your bank statements (the automatic import functionality that is so useful in the business version of Xero has been restricted to BNZ MoneyMap customers only). For each transaction you are asked to provide two pieces of information. The first is a category, which serves as a basic form of account to track expenses and income. For each category you can set a spending or saving goal which Xero will help you track progress towards. The second is a name to identify the other party in the transaction. Xero Personal comes pre-loaded with some fairly generic categories. Annoyingly, you're restricted to no more than 8 additional custom categories, and the names associated with each transaction are simple strings – you can't link a transaction to another account or entity. To represent a transfer you need two separate transactions, one in each account, which you assign to the special category "Transfer" so that Xero knows to essentially ignore it. Nothing links the transactions together or ensures that the values balance.

In addition to the basic categorisation functionality the application also attempts to track your assets and liabilities (bank accounts and credit cards show up automatically) so that it can compute your net worth. Unfortunately as soon as you try and use this you hit the problem that there is no way to link transactions from your accounts back to your assets and liabilities. This means unless you regularly and manually update your assets and liabilities the “net worth” calculation only takes into account changes in your cash position and becomes blatantly incorrect.

As an example, take the common case of a household with a weekly mortgage (or other loan) repayment. You want the weekly payment to decrease the balance of your current account, increase the balance of your interest expense category and decrease the value of your mortgage liability. Your net worth should decrease by the value of the interest expense only, as the decrease in the value your mortgage liability offsets the remainder of the decrease in the value of your current account.
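
To put numbers on that: if a $500 payment is made up of $300 interest and $200 principal, the current account drops by $500 and the mortgage liability drops by $200, so net worth should fall by only the $300 of interest.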

Xero Personal doesn't come close to being able to handle this example today. The ability to split payments across different categories has also been left out (even though it's present in Xero Business and therefore presumably in the underlying engine), so your only option is to categorise the entire payment as a mortgage or housing expense, decreasing your net worth by the full value of the payment. Even if you could split the payment between two categories, one for the interest and one for the principal, the inability to link the principal's category to the liability account means the net worth calculation will still be incorrect.

Maybe I'm being too hard on this newly released product? It is a SaaS application after all, and Xero has an excellent history of releasing regular updates to the business version. The reason I'm so surprised and disappointed by this initial release is that it essentially lacks any double-entry accounting support – many of the missing features are core functionality already implemented in the accounting platform that supports the business version. Assuming Xero Personal is built on the same platform (and that would be the obvious choice, wouldn't it?), the fact that it has been released and is being heavily promoted without these features (compared to the initial version of Xero Business, which was fully functional and obviously awesome even in beta) suggests to me that this is a conscious decision to significantly limit the scope and usefulness of the application, rather than simply a limit on what could be implemented before the initial release.

I sincerely hope that I’m wrong and that the coming months bring significant improvements to the functionality of Xero Personal, but until it can support common transactions like mortgage repayments correctly I won’t be using it or recommending it to anyone.

June 28, 2009

Political Compass

Filed under: General,Life — @ 12:32 pm NZST

It's been a while since I've taken any sort of quiz like this, so when David Farrar from Kiwiblog posted his results today it prompted me to give it another go.

My Political Views
I am a center-right moderate social libertarian
Right: 1.33, Libertarian: 1.97

Political Spectrum Quiz

I completed the quiz pretty quickly and felt the need to answer ‘it depends on the specifics’ to many of the questions, so take the results with a grain of salt. I think it is a reasonably accurate description of me though.

June 26, 2009

GPG Keysigning Update

Filed under: Debian,WLUG / LinuxNZ — @ 12:56 pm NZST

From the better late than never category… I finally got around to signing keys from the LCA2006 key signing party, the verification sheet from which has travelled with me from NZ to Dublin and then sat on my desk for a few years. I inevitably lost a few of my notes and verifications along the way, so if you were still expecting a signature from me and didn’t get one let me know!

The main hold up for me has been that my previous key signing system, a home grown script, was overly complex and involved me sending an encrypted token to each UID that I waited to receive back before issuing the signature. Lots of work for me, and much hassle for those whose keys I am signing. I’ve reverted back to the more standard method of signing and encrypting the signature to each UID and then throwing my copy of the signature away. Unless the recipient controls the UID and can decrypt the message, the signature will never be released to the world.
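
In gpg terms the method is roughly the following, per UID (pius automates all of this; the key IDs and address are placeholders):

# gpg --local-user 0xMYKEY --sign-key 0xTHEIRKEY
# gpg --armor --export 0xTHEIRKEY > signed-key.asc
# gpg --armor --encrypt --recipient them@example.org signed-key.asc

The resulting signed-key.asc.asc goes out by email and the local copy of the signature is deleted, so the signature only ever sees the light of day if the recipient both controls the UID's mailbox and holds the key to decrypt it.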

I’ve adopted pius as my new signing tool of choice, with a few extra patches to help me maintain my database of signature details and the corresponding verification pages at http://www.mattb.net.nz/pgp/signatures which are linked from the Policy URL packet of each signature I make. I guess I’ll tidy up the patches over the next few days and see if there is any interest in getting them merged.

February 24, 2009

The government listened!

Filed under: Debian,General,WLUG / LinuxNZ — @ 1:07 pm NZST

I was very pleased to wake up this morning to the news that National has delayed the introduction of S92A via an order-in-council. It’s a nice short-term victory, but I’ll save the champagne until the law is fundamentally rewritten.

The most pleasing aspect of the decision is simply that it was made at all. Within two weeks, a small band of protesters were able to harness the power of the Internet to direct international attention and place enough pressure on the Government that its Prime Minister, who admitted to not having read the bill beforehand, took the time to understand the issues and personally announce the delay in implementation of the law. We owe much thanks to the Creative Freedom Foundation for all the effort they put into co-ordinating the protest and ensuring that a single coherent message was presented. Just a little bit of my cynicism and belief that politicians never listen to public opinion outside of election campaigns was chipped away today.

The reason I’m not breaking out the champagne yet is that we’ve only achieved a temporary reprieve in the commencement of the law. While those present at the press conference seem somewhat confident that John Key didn’t like what he found in the law and would have repealed it if given the chance, all that has actually been done is delay it in the hopes of an agreement between the TCF and the “rights holders” (aka big media companies) on how to implement the still fundamentally broken law. The Government has given until late March for that to occur.

To put this into a more global context. My happiness as I took the bus to work after reading about the decision to delay the law was short lived as the front page of the local paper declared that Eircom (Ireland’s equivalent of Telecom) has “voluntarily” agreed to block sites such as The Pirate Bay upon request by the media companies (this comes a week after they also announced an agreement to, again “voluntarily”, implement a 3-strikes S92A style policy). Now, with the biggest ISP in their pocket (so to speak), the media companies have sent threatening letters to the remaining ISPs in the country demanding they implement the same procedure.

To me, this illustrates one of the fundamental problems with S92. The concept that an ISP is liable for the conduct of its users, or for policing where on the Internet users should and shouldn't be able to connect, does not belong in our laws. Most ISPs already have provision in their terms and conditions to disconnect customers for illegal activity. If an end-user is doing something illegal, that is an issue between the rights holder and the end-user to take up in the courts, just like every other sector of society must do when wronged, at which point the existing ISP terms and conditions can be invoked and access terminated.

The big media companies, having decided that it is too expensive/hard/inconvenient to follow standard legal procedures to resolve their grievances, are launching multi-pronged attacks to shift the playing field in their favour. In countries like New Zealand, where our politicians yearn for a Free Trade Agreement with America, they use their lobbyists to ensure that S92-style laws are part of the conditions. In other jurisdictions, like Ireland, they use strong-arm, divide-and-conquer style bully tactics outside of the political and legal process.

I don't support copyright infringement. I rely on copyright to protect much of the work I place on the Internet, and I want strong laws that protect me when my rights have been infringed. I don't believe that such laws should come at the expense of due process, our legal tradition and the basic principle of fairness! I don't believe that copyright infringement is such a heinous crime that it demands punishments stronger than those we deliver to paedophiles, stalkers or any other class of criminal who uses the Internet to enable their crimes.

To me, today's (yesterday's – depending on your timezone) decision is only the first step in clawing New Zealand back from the dangerous path that the big media companies have been leading our law makers down. From here we need to press on and demonstrate to the Government over the next month that even if the TCF and rights holders are able to come up with some sort of workable code of practice, the law is still fundamentally flawed. It is based on the premise that we are guilty by accusation.

Even if guilt were to be proved by a competent legal body (e.g. a court or copyright tribunal), we don't need laws placing further liabilities onto ISPs (and remember, the definition of ISP under this amendment act includes businesses who provide Internet access to staff, libraries, schools and hospitals) when their existing terms and conditions already prohibit illegal activity.

Finally, and most importantly of all, we need to remember that laws exist to serve all sectors of society. Yes, copyright infringement is against the law, and rights holders are reasonable in expecting the law to protect their content and allow them to make a fair profit. On the other side of the fence, average New Zealanders are not being unreasonable in their desire to have media available electronically, on demand, and unencumbered by DRM following a legal purchase. The failure of the media businesses to adequately cater to this change in market demand and usage of technology is obviously a contributing factor to the widespread copyright problems that they are facing today.

Obviously, I’m not condoning copyright infringement simply because the media companies are failing to address demand. Even stupid laws must be obeyed (and the concept of copyright is far from stupid). What I want to see is the Government acknowledging that the problem is not solely with consumers infringing copyright for malicious purposes, and therefore that the solutions do not lie solely in increasing the enforcement and punishments available.

Copyright has always been a balancing act between the rights of content producers and consumers. S92 and the act it is contained within are taking us far too far down the road of catering to big business and their outdated business models with far too little concern for the rights of the individual consumer.

Despite the many submissions made on this act last year when it was first passing through parliament, there was no comprehensive debate on what copyright means and how it should balance the rights of content producers and consumers in our digital century where copying is a zero-cost, zero-thought activity. Without such a debate we’re doomed to continue wasting time arguing over the symptoms of the problem, like S92.

So, I’m saving my champagne for the day when we as a country address these issues and come up with a fair and workable interpretation of what copyright means today.
