Wednesday, December 31, 2014

Getting All My Mouse Buttons to Work in Linux

Introduction

When I got my new computer, I bought a Logitech M510 mouse with 9 buttons.

Logitech M510 Mouse

For day to day use, and because I had other things to get working (like my keyboard), I decided to live with the out-of-the-box functionality of the mouse. I figured that when I had a burning need to use the additional buttons, I would look into them.

Many years ago I had a mouse with two thumb buttons and it was awesome for playing TFC. I had the buttons mapped to the two types of grenades. The setup worked nicely because you were not forced to hold down a key on the keyboard while also trying to press your movement keys.

Alas, it has been many, many years since I have played online games. However, this weekend I found myself playing Metro Last Light. The default mappings for the alternate weapon and melee are kind of cumbersome. Suddenly I remembered that my mouse had all these extra buttons and how well they had worked in TFC. Finally I had a reason to set them up.

The Setup

At first, maybe naively, I tried to map them directly in the game. Unfortunately, the game did not register the button clicks at all. After a bit, I thought, no worries, I’ll just map their clicks to the keys the game binds by default, somehow.

Making Sure the Buttons Work

The first thing to do was to see whether the buttons even register with the OS. To do that, you can use xev:

$ xev | grep button

Once the little window loads, move your mouse over and start clicking away. Sure enough all my buttons were showing up. If you find that some of your buttons are not working, you will have to modify your xorg.conf file.
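
The plain grep only matches the detail lines; matching case-insensitively also catches the ButtonPress and ButtonRelease event headers, which makes the output a little easier to follow:

$ xev | grep -i button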

Mapping the Buttons

A little Google searching revealed that there is more than one way to remap keys. I know, shocking!

I decided to start with xbindkeys

swoogan@workstation:~$ xbindkeys
The program 'xbindkeys' is currently not installed. You can install it by typing:
sudo apt-get install xbindkeys

swoogan@workstation:~$ xbindkeys
Error : /home/swoogan/.xbindkeysrc not found or reading not allowed.
please, create one with 'xbindkeys --defaults > /home/swoogan/.xbindkeysrc'.
or, if you want scheme configuration style,
with 'xbindkeys --defaults-guile > /home/swoogan/.xbindkeysrc.scm'.

swoogan@workstation:~$ xbindkeys --defaults > /home/swoogan/.xbindkeysrc

Then you just need to define the mappings in your config file. For example:

# Gren
"xte 'key c'"
  b:9

# Melee
"xte 'key v'"
  b:8

To me they seem to define things in reverse. The pattern is:

# Name
"Action I want to perform"
  Event I want to trap to perform said action

Seems like a value = name sort of arrangement, but I digress. xte is a tool that will allow you to simulate button and key presses. It’s actually intended to create fake input for testing purposes. You can read the man page here.

swoogan@workstation:~$ xte 'key c'
The program 'xte' is currently not installed. You can install it by typing:
sudo apt-get install xautomation
swoogan@workstation:~$ sudo apt-get install xautomation
swoogan@workstation:~$ xte 'key c'
swoogan@workstation:~$ c

Now my two mouse buttons fire ‘c’ and ‘v’, which will work for Metro.

Except it Will Not

After loading the game I found that the key press events (xte 'key c' and xte 'key v') do not fire when in fullscreen game mode. I have not had time to look into why and to see if there is a way around this.

Final Thoughts

There are a couple of things that I would like to refine:

  1. This one is pretty obvious: it makes more sense to map the buttons to keys that are a little more obscure. For example, a modifier key like Alt or Ctrl might be better because an errant mouse click would then be less likely to type characters into a document.
  2. I would like to see if this can be configured per application. I think that, in general use, I would like these buttons to be my browser forward and back buttons (see the sketch after this list).
  3. The mouse actually has two additional buttons: the scroll-wheel tilts left and right. I am not really sure what I should do with those in general use. I find the left tilt very hard to execute without also pressing the wheel down.
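
For the record, here is an untested sketch of what the browser mapping from item 2 might look like in .xbindkeysrc, using the same button numbers as above and the usual Alt+Left/Alt+Right browser shortcuts:

# Browser back
"xte 'keydown Alt_L' 'key Left' 'keyup Alt_L'"
  b:8

# Browser forward
"xte 'keydown Alt_L' 'key Right' 'keyup Alt_L'"
  b:9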

For the second issue, I am sure I could wrap my application with a script like the following:

#!/bin/bash
killall xbindkeys && xbindkeys -f xbindkeys.someapp
someapp
killall xbindkeys && xbindkeys

But that seems pretty messy. If I find a better solution, I will be sure to write about it.

Wednesday, December 24, 2014

PowerShell Help File Authoring Woes

Poor Documentation

I was really hoping to write some killer help for [my cmdlets](https://github.com/Swoogan/Octopus-Cmdlets), but I did not know where to start. If you search around the web and MSDN you might find the only two “helpful” files:
http://msdn.microsoft.com/en-us/library/dd878343(v=vs.85).aspx
and
http://msdn.microsoft.com/en-us/library/bb525433(v=vs.85).aspx

I was not able to find anything in the way of blogs or community documentation. There are bits and pieces, here and there, but they are mostly geared toward advanced functions (script cmdlets).

Note that there are several flaws in the second article that held me up quite a bit. For one, they say to “add the following XML headers to the text file” and list <helpItems xmlns="http://msh" schema="maml">. This is not a header. It is an opening tag that needs to be closed at the end of the document. Second, the example does not show either of the “headers”. Would it kill them to ever give an example that is complete? Finally, the page does not indicate where this file should be saved, nor does it link to the separate document that does (the first one).

Writing a Sane Example

I do not know why it would be so hard to write:

Save the following file to SampleModule\en-US\SampleModule.dll-help.xml

    <?xml version="1.0" encoding="utf-8" ?>
    <helpItems xmlns="http://msh" schema="maml">
      <command:command xmlns:maml="http://schemas.microsoft.com/maml/2004/10" xmlns:command="http://schemas.microsoft.com/maml/dev/command/2004/10" xmlns:dev="http://schemas.microsoft.com/maml/dev/2004/10">
        <command:details>
          <command:name>Get-Something</command:name>
          <command:verb>Get</command:verb>
          <command:noun>Something</command:noun>          
          <maml:description>
            <maml:para>Gets one or more somethings.</maml:para>
          </maml:description>
        </command:details>
      </command:command>
    </helpItems>

I am not sure what the redundant name, verb, and noun nonsense is about.
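
To check that PowerShell is actually picking the file up, something like the following should work, assuming the module is somewhere on your PSModulePath:

    Import-Module SampleModule
    Get-Help Get-Something -Full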

And it All Goes Downhill

When I started researching I was ecstatic to learn that the format is XML. I was so looking forward to writing 100% more tags than text. Remember, it is always a good idea to make writing documentation more of a pain than it already is. There is nothing like a boatload of friction to motivate people.

Oh well, after way too much time fighting the bad documentation I finally got my help file to load. What an achievement!

Here is what the system generated documentation looks like (aka no help file):

NAME
Get-Something

SYNTAX
Get-Something [[-Name] <string[]>] [<CommonParameters>]

Get-Something -Id <Int32[]> [<CommonParameters>]

And with my file:

NAME
Get-Something

SYNOPSIS
Gets one or more somethings.

SYNTAX

DESCRIPTION

RELATED LINKS

Hey, what the heck happened to my SYNTAX??? Really? If I do not define it, it goes away? REALLY???

Once again Microsoft forces you into an all-or-nothing proposition. Want to add SYNOPSIS fields to all your cmdlets? Boom! All your SYNTAX descriptors are gone. But the SYNTAX was perfect the way it was!!!

Good thing making documentation is so much fun, or I might have started to get discouraged at that point.

WTF???

Well, the only thing left to do is to find out how to document the syntax. All I wanted to do was add a synopsis to each cmdlet (for now), but let's see what it would take to bring this up to snuff. Here is the document on writing syntax:
http://msdn.microsoft.com/en-us/library/bb525442(v=vs.85).aspx

That is right, boys and girls: you have to take all the cmdlets, in all their variations, and meticulously write XML to define the usage. Were you thinking you could maybe just copy the generated output into a syntax element like so?

<command:syntax>Get-Something [[-Name] <string[]>]  [<CommonParameters>]</command:syntax>

Hahahahahahahaha hahaha haha ha…

The best part is that you get to keep your documentation in sync every time you change your cmdlets. Now I am faced with a dilemma:

  1. Release my young and rapidly changing project with no documentation.
  2. Write the documentation now and have to change it over and over as the API matures.

What wonderful options.

For the record, this is what the syntax XML looks like for a single variation of a single command:

   <command:syntax>
      <command:syntaxItem>
        <command:name>Invoke-psake</command:name>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>buildFile</maml:name>
          <command:parameterValue required="false" variableLength="false">String</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>taskList</maml:name>
          <command:parameterValue required="false" variableLength="false">String[]</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>framework</maml:name>
          <command:parameterValue required="false" variableLength="false">String</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>docs</maml:name>         
          <command:parameterValue required="false" variableLength="false">SwitchParameter</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>parameters</maml:name>         
          <command:parameterValue required="false" variableLength="false">Hashtable</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>properties</maml:name>         
          <command:parameterValue required="false" variableLength="false">Hashtable</command:parameterValue>
        </command:parameter>
        <command:parameter require="false" variableLength="false" globbing="false" pipelineInput="false" postion="0">
          <maml:name>nologo</maml:name>         
          <command:parameterValue required="false" variableLength="false">SwitchParameter</command:parameterValue>
        </command:parameter>
     </command:syntaxItem>
   </command:syntax>

Taken from the psake help file.

Do not worry. They know there is a problem and are thinking about improving it. It is not like they are just going to totally ignore it for eight years or something.

And yes I will look into the PSCX module mentioned in that link, but in the meantime I am going to cry.

Wednesday, December 3, 2014

Raid Setup

In Search of Speed

As I mentioned in a previous post, saying that my HDD’s volume group was on /dev/sdb1 is not quite true. Although I wanted to have a large scratch space for virtual machines and renderings, I did not want to sacrifice too much speed. Both of those workloads benefit from faster disks. The cost of SSDs in the 512GB - 1TB range is quite prohibitive, so I sought a compromise.

RAID

Striping

I began to investigate RAID solutions. Of course there is the obvious RAID 0 array, but I have never been a fan of striping or tying two disks together in a non-redundant way. RAID 0 doubles your risk of disk failure and makes your setup more complicated. I guess I am uneasy with it because one time back in about 2002 I got a new Seagate drive and rather than making it a new E: drive, I extended my existing volume onto it. About 2 months later, the drive started to fail and I had a very difficult time getting it out of the volume without losing all my data. Having a bad drive take out a good one is maddening.

Hardware vs Software

Before I got much further, I realized I would have to answer the question of hardware or software raid. After a quick bit of googling, some ServerFault questions and a couple of blogs, I decided on software. This blog post had a lot to do with convincing me:
http://www.chriscowley.me.uk/blog/2013/04/07/stop-the-hate-on-software-raid/

Mirroring

At this point I started to look into RAID 1 (mirroring) and btrfs. I quickly discarded the idea of btrfs because I am running Kubuntu 12.04 LTS and everything I read said I should be using a more recent kernel. I am willing to patiently wait for it to come to me in the next LTS release.

What I found regarding Linux software RAID 1 is kind of surprising. All over the web you can find benchmarks that show, counter-intuitively, that RAID 1 is not any faster than a single disk. I am only referring to reads, as it is obvious that since you write to both disks it cannot be faster in that regard. It seems most people assume, like I did, that Linux software RAID 1 would read from both disks in parallel and therefore reads could peak at 2x.

After a little more investigation, I found out that because the data is not striped, Linux only reads from a single disk for an individual read operation. It will use both disks in parallel for multiple concurrent read operations. So a single large file will read at 1x, while two large files can read at up to 2x.

RAID 5

Since I was going for speed, that meant RAID 1 was out. With that, I started looking at RAID 5. The immediate problem with RAID 5 is the cost. With a minimum of 3 disks, the smallest array possible already costs more than a 256GB SSD. I need more than 256GB, but any solution that approaches the cost of a 512GB SSD would favour the SSD.

Unfortunately, RAID 5 with three disks has pretty dismal write speeds. Although I am mainly focusing on reads, I would like to keep the write speeds up too. I believe that adding another disk to the array would increase both the read and write speeds, but at four 1TB drives you are smack dab in the 512GB SSD price range. I would certainly have more disk space, but I honestly do not need 3TB. Finally, RAID 5 suffers from long rebuild times, during which other drives can fail.

RAID 10

Enter RAID 10. RAID 10 is striping over mirrored sets, not to be confused with mirroring over striped sets. RAID 10 is fast: reads are akin to striping and writes go at the speed of a single disk. Also, rebuild times are much faster than RAID 5. Here come the downsides, though. Since the disks are in a mirrored configuration, you lose 50% of the total disk capacity. Second, it requires a minimum of four disks. I don’t care about the lost disk space; I doubt I would have used it anyway. But the cost issue rearing its head again was disheartening.

The whole thing began to bug me at this point. Why did RAID 1 not get the read speeds of RAID 10 (note: I had not found the reason at this point)? Why did RAID 10 require four disks? I kept thinking about it and I was sure that there should be a way to configure two disks into something like a RAID 10 array that would get the same read speeds. I realized that you should be able to segment each drive into two sections. You could mirror the data of the first disk’s section to one on the second and vice versa. You should then be able to stripe across the sets.

Tasty Cake

That is when the light bulb went off. I had seen something like this when I was reading about all the Linux software raid types. I went searching again and found my holy grail:

Linux kernel software RAID 1+0 f2 layout

Aka RAID 10 far layout (with two sections). It builds a RAID 10 array over two disks.

RAID 10 F2

With this setup, the read speeds are similar to striping and the write speeds are just slightly slower than a single disk.

It turns out you can build these things in a bunch of crazy ways. You can specify the number of disks (k), copies of the data (n), and sections (f). So you could have 2 copies over 3 disks, 3 copies over 4 disks with 3 sections, etc…
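
As a concrete example, creating the two-disk f2 array described above should boil down to a single mdadm command; this is just a sketch, with /dev/sda1 and /dev/sdb1 standing in for your actual partitions:

sudo mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1

As far as I can tell, the digit in the layout string is the number of copies, so three copies over four disks would be --layout=f3 --raid-devices=4.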

Once again, software RAID shows its value. Without the constraints that hardware RAID enforces, you are free to set up your system as you please.

Epilog

I have also summarized this post as an answer to a question on serverfault: https://serverfault.com/questions/158168/slow-software-raid?rq=1

In my next post I am going to talk about performance testing my setup and a few surprises I found.

Wednesday, November 26, 2014

The Joy of Working with a "Supported" Linux Device

In Search of a WiFi Adapter

After getting an Azio keyboard, I learned my lesson. Always check to make sure a device will work with Linux. Because I was moving to a suite that only had WiFi, I was going to need to get an adapter for my workstation. After a fair bit of searching, I settled on the Asus USB N13:

Asus USB-N13 WiFi adapter

I plugged the device into my computer and Kubuntu immediately recognized it. A few minutes later, I was on the Internet. A few minutes after that, I was not. On and off this thing went, like an Internet yo-yo. Additionally, every time it reconnected it wanted the WiFi password again.

After searching around quite a bit, it became apparent that the behaviour I was seeing was a widely known problem with the kernel driver.

Linux Drivers

My first thought was to return this thing and get something that was better supported. Unfortunately, there were not any better options available to me, and who knew whether they would work anyway? Apparently “Supports Linux” is a vague claim.

So I downloaded the driver from Asus’s site and tried to build it. That failed with the following:

dep_service.h:49:29: fatal error: linux/smp_lock.h: No such file or directory
#include <linux/smp_lock.h>

Since the device has a rtl8192cu chipset in it, I headed over to Realtek’s website to download their version of the driver. Right away I knew I probably was out of luck. Their website says that the driver supports Linux Kernel 2.6.18 ~ 3.9. I am running Kubuntu 14.04, which has kernel version 3.13.

I decided to try compiling it anyway, but was not surprised when I got an error. The compiler was complaining that proc_dir_entry did not exist. After a bit of searching, I found that proc_dir_entry had moved from /linux/fs_proc.h to /fs/proc/internal.h. It turns out that file is not shipped in my kernel headers, so I had to get the kernel source:

apt-get source linux 

Then I copied internal.h to /usr/src/linux-headers-$(uname -r)/fs/proc and modified the driver source to include the header. After recompiling, I got the following error:

os_dep/linux/os_intfs.c:313:3: error: implicit declaration of function ‘create_proc_entry’ [-Werror=implicit-function-declaration]
rtw_proc=create_proc_entry(rtw_proc_name, S_IFDIR, init_net.proc_net);

It turns out that create_proc_entry has been deprecated in favour of proc_create. I tried changing the call, but unsurprisingly, the interface had changed too. At that point I gave up on the Linux driver.

NDISWrapper

So I went back to the Realtek site and downloaded the Windows driver, hoping to use NDISWrapper to load it. I do not know a lot about NDISWrapper, so I downloaded the GTK frontend:

sudo apt install ndisgtk

Figuring the oldest driver interface would be the most reliable, I went for the WinXP 32-bit driver first. It immediately told me that it was an invalid driver. I decided to jump over the notoriously flaky Vista drivers and go for the Win7 32-bit driver. That also seemed to be invalid. It turns out that hunting for the most reliable driver was silly: I, of course, needed a 64-bit driver for my 64-bit OS.

Knowing that WinXP 64-bit drivers are also fairly hit and miss, I went straight for the 64-bit Win7 driver. This driver loaded, but failed to work. Looking in dmesg, there was no error; it just failed silently.

After searching and searching, I finally found this Ask Ubuntu question:
http://askubuntu.com/questions/246236/compile-and-install-rtl8192cu-driver

User mchid points to a github repo that finally gave me a working driver:
https://github.com/pvaret/rtl8192cu-fixes

It appears that the owner of the repo simply removed all the proc code from the driver.

Conclusion

Why does the out-of-the-box Linux driver suck so badly? Why is it not dropped in favour of the GPL one written by Realtek? Having two drivers, neither of which works, is asinine.

Wednesday, November 19, 2014

Installing Azio Keyboard Module with DKMS

Final Chapter in the Keyboard Saga

Last week I saw a pending kernel update and I decided enough was enough. It was time to get my Azio keyboard driver working with DKMS and stop the insanity.

It turns out that using DKMS is one of those things that ends up being a lot easier to do than you think it will be. I am so used to easy things being hard with Linux that I forget that some hard things are easy.

I started with the Community Help Wiki article on DKMS. They have a good sample dkms.conf file that I started from:

MAKE="make -C src/ KERNELDIR=/lib/modules/${kernelver}/build"
CLEAN="make -C src/ clean"
BUILT_MODULE_NAME=awesome
BUILT_MODULE_LOCATION=src/
PACKAGE_NAME=awesome
PACKAGE_VERSION=1.1
REMAKE_INITRD=yes

I also have a driver on my system, for a USB network adapter, that uses DKMS. It’s the rtl8192cu driver for the Realtek chipset.

I took the two sample config files and merged them together, removing the duplicate lines. Then I commented out the lines that were exclusive to one file or the other and modified the common lines to match my project. Finally, I ran man dkms and began researching what the directives on each of the commented lines did.

This is what I came up with:

PACKAGE_NAME=aziokbd
PACKAGE_VERSION=1.0.0
BUILT_MODULE_NAME[0]=aziokbd
DEST_MODULE_LOCATION[0]="/kernel/drivers/input/keyboard"
AUTOINSTALL="yes"

See how simple it is?

Next I modified my Makefile to build and install the DKMS module. Again, I copied from the rtl8192cu driver. Here’s the final Makefile target:

dkms:  clean
    rm -rf /usr/src/$(MODULE_NAME)-1.0.0
    mkdir /usr/src/$(MODULE_NAME)-1.0.0 -p
    cp . /usr/src/$(MODULE_NAME)-1.0.0 -a
    rm -rf /usr/src/$(MODULE_NAME)-1.0.0/.hg
    dkms add -m $(MODULE_NAME) -v 1.0.0
    dkms build -m $(MODULE_NAME) -v 1.0.0
    dkms install -m $(MODULE_NAME) -v 1.0.0 --force
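
With that target in place, building and registering the module should just be a matter of running the target as root, since it writes to /usr/src and /lib/modules:

sudo make dkms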

Remind me to add a version variable!

Thanks to Dylan Slavin’s awesome contribution, the driver now has a nice install.sh script to get users up and running with minimal effort.

Go and get it.

Wednesday, November 12, 2014

Generating Documentation with Markdown and Pandoc

Introduction

Over the years I have written a lot of documentation. I would say that about 98% of it has been in Microsoft Word. The other 2% has been written in text, usually a readme.txt. I generally use text when I need a break from Word. (It turns out that I have been using a convention that is very similar to Markdown for my text documents and did not know it)

Problems with Word

For me, Word has been a necessary evil. I do not feel that it is a great tool for documentation. I find that I spend too much time and get too distracted with formatting. In particular, converting code to fixed-width consumes a lot of time.

For clarification: I am referring to documentation written by developers for developers on the same team. I am not referring to API documentation for external developers or end user documentation.

The other large issue I have with Word is that the file format is binary. I firmly believe that documentation should live as close to the source code as possible. For this reason I prefer storing documents in source control over an external wiki (or some lesser repository). But that means putting binary files in source control, and as you likely know by now, that causes problems with branching and merging. Specifically, most source control systems do not know how to merge binary files.

One of the reasons that I believe documentation should live with source code is specifically the case where you are branching the code. Take feature development for example. Suppose that a code change in a branch causes the documentation to change. If you have stored your documentation external to the code you are now faced with a dilemma. Do you change the doc to reflect the as-is state or the to-be? One or the other will be wrong. How will people know? Technically, you could put both in. However, once you merge you will have to remember to remove the old documentation. (Of course, the problem only gets worse if you have additional branches)

Now suppose that a bug fix causes a change to the documentation in the mainline branch. Again you are faced with the problem of deciding where this change goes. On the other hand, if it is in source control you are in a situation where the documentation needs to be merged. This brings us back to the problem with Word’s binary files. Again, they cannot be merged, and speaking from experience, large documents are very hard to merge manually.

Down the Hill We Go

Further options seem to go from bad to worse. I have seen Word documents stored on shared network drives. To me this is the worst of the worst. You are still stuck with Word, but now you have no version control at all. Furthermore, a strange thing seems to happen in this case: people stop collaborating. Suddenly, rather than change the document themselves, people start emailing changes to the original author. It is a peculiar behaviour I have noticed.

Then there are all the external repositories, things like SharePoint. You do get the versioning back, but you still lose the branching and merging capability. Another worst-of-the-worst is the SharePoint wiki, which is even more cumbersome to use than Word. At that point you are better off putting Word documents in source control. Or, alternatively, getting a usable wiki system. Or painting on cave walls.

In summary, my order of preference:

  1. A branchable/mergable format in source control
  2. A binary file in source control
  3. Wiki
  4. Sharepoint (or similar) document repository
  5. Cave drawings
  6. Sharepoint wiki
  7. Network share

Alternatives to Word

Several alternatives to Word exist; however, very few are available “out of the box” in most organizations. That has led me in the direction of text with some sort of markup.

Text

I only recently encountered Markdown. Prior to that I was using my own syntax that was quite similar (which is not surprising given that they both have the same source: text email conventions). Marking up text is good, but not great. It can be branched and merged, but _I am italics_ does not scream out italics to everyone.

HTML

Another option is writing HTML. Again it is text with markup. However, HTML has two problems:

  1. If you think formatting Word documents is a pain, give HTML a whirl. (I suppose you could use an editor, but I am picturing hand-written HTML.)
  2. It is not the input format for the final document. It is the document.

Now the second point is sort of moot if you think of a browser as the document viewer. There is not much difference between loading an HTML file in Chrome and loading a PDF in Acrobat. However, this does differ from the experience that you get with a tool like…

LaTeX

I finally got fed up with Word last year and began to look for a replacement. The idea of writing in text and generating a PDF (or some such document) was where I kept landing. Since TeX and LaTeX are king, that is where I looked.

Things never really got off the ground with me and LaTeX. There are two connected issues I have with it:

  1. The syntax is complex. Not overly complex, but complex enough.
  2. Because of #1, I could not see getting it absorbed into the organization I was working for. Remember, I want to store the text files in source control.

PostScript

The last thing I looked at was writing PostScript by hand. This way I would store the PostScript files in source control, but everyone else could treat them like PDFs. However, PostScript is a little too cumbersome to write by hand (RTF would have the same issue).

At this point I put my search on hiatus. I had spent enough time, in vain, looking for alternatives. It was time to get back to work and that meant suffering through Word.

Enter Markdown

Introduction

It has been about six months since I had to write any developer documentation. Last week I wrote a bit of documentation for my current client and realized I did not know where to put it. I fired off a quick email to my manager asking where such things should live.
His answer:

You can create some Word docs… We can check those into TFS or put them up onto a SharePoint.

I might have cringed a bit.

Markdown

Here I was, once again looking at my old nemesis. To the web I went. I quickly found this discussion on StackOverflow:

http://stackoverflow.com/questions/12537/what-tools-are-used-to-write-documentation

The answer from Colonel Panic, in particular, caught my eye:

I write in Markdown, the same formatting syntax we use on Stack Overflow. Because the documents are plain text, they can live alongside code in version control. That’s useful.

I render the documents to HTML and PDF with the swiss army knife Pandoc. With a short stylesheet, these look better than documents from word processors.

Well now, what have we here? This is perfect! A simple markup that I already know and the ability to convert it to the formats bosses love. I was sure that PDF would be an acceptable format, but a quick check of the website revealed that pandoc also supports conversion to DOCX (and about 25 other formats).

Pandoc

I downloaded and installed the Windows MSI on my machine. Loading PowerShell, I found that pandoc was not in the path. The documentation implies that it should just be there, so I checked the PATH in the system settings and found the entry was there. I am not sure why PowerShell was not picking it up. So… when in doubt, reboot.

Next I created a simple Readme.md and ran

pandoc Readme.md -o Readme.docx

And sure enough, I had my Word doc. I could not be happier.
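
For the HTML route mentioned in the quoted answer, something like this should produce a standalone page with a custom stylesheet (style.css being whatever short stylesheet you supply):

pandoc Readme.md -s -c style.css -o Readme.html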

Next I tried

pandoc Readme.md -o Readme.pdf

Unfortunately, that resulted in the following error:

pandoc.exe: pdflatex not found. pdflatex is needed for pdf output.

First I found a blog recommending I download protext.exe from http://tug.ctan.org/tex-archive/systems/win32/protext/ The file is 1.7GB. Something smells fishy. If there is not a copy of Debian Linux in there I am going to say it is a little too big for my taste.

Then I landed back at the pandoc installation page, where it says

For PDF output, you’ll also need to install LaTeX. We recommend MiKTeX.

I opted for the 64-bit Net installer to see if I could trim down the download a bit. Still, 158MB is better than 1.7GB (11.01 times better, to be somewhat exact). I chose the basic install and picked a mirror nearby. In the end I have no idea whether that saved anything; I am guessing not. I still feel that it is way too heavy a requirement for another application, and I have a hard time believing all that weight is necessary. For reference, compare the source for txt2pdf, or a more comparable example, wkhtmltopdf, which clocks in at 13MB. I digress…

After installing it I once again had to reboot (I tried logging out but Windows just sat at the logging out screen until I rebooted). After rebooting I ran:

pandoc Readme.md -o Readme.pdf

This time MikTeX popped up a few times asking to install additional packages. After that, I had my PDF.

Conclusion

Now I just need to figure out how to do the same thing with Visio. For reference, this video pretty much sums up my experience using Visio. I think it might be more annoying to use than iTunes.

Wednesday, November 5, 2014

TFS: Remapping a Folder not in Your Workspace

A week ago I created a solution through Visual Studio. It put the project in the typical place: C:\Users\swoogan\Documents\Visual Studio 2012\Projects. After sketching out a rough draft, I added the solution to source control. The area in TFS that I added it to was already mapped to another location on my hard drive via a Workspace.

Some time later I went to open the solution from Windows Explorer and could not find it. I was looking in the local folder defined by my Workspace and not my Projects folder, as I was thinking about where it was in source control, not where I had created it.

When I looked in Source Control Explorer it was clear that the folder was mapped to the Visual Studio 2012\Projects location that I had originally created it in. What I wanted to do was wipe that mapping out and download the solution to its proper location. Somehow TFS was overriding my Workspace mapping.

I quickly checked my Workspace definition for the folder, but it was not listed. VS/TFS seemed to be storing the mapping somewhere else. I vaguely remember that TFS has had that capability going back to at least VS 2010. However, I also recall that in VS 2010 you could right-click the folder in Source Control Explorer and there was a Mappings option there. Hunting all over VS 2012 did not reveal similar functionality. I am sure it is in there somewhere, but I could not find it.

I did a brief search online, but the immediate results dealt with manipulating Workspaces. This was a very specific problem and I knew the cause. I also knew finding the solution via searching was going to be tricky.

Therefore, I first decided to try out an idea I had. I figured that if I could disconnect the Projects folder copy and Get Latest, TFS might re-download the folder to the folder mapped by my Workspace.

There is a hidden “feature” in Visual Studio for unmapping a folder from source control. For whatever reason they did not include a way to do this easily. Once you create a Workspace mapping for a folder and do a Get Latest there is no obvious way to undo this action. Deleting the files from Source Control Explorer also deletes them from TFS. Deleting them from the filesystem just confuses TFS; it thinks they are still there.

To unmap a folder from TFS, you must use the Get Specific Version feature. They have moved this around in various versions of VS, but basically you will find it by right-clicking; it may be buried in the Advanced submenu. From there you change Version -> Type to Changeset and enter 1 in the Changeset field. Finally, click the Get button. This deletes the local files and greys out the folder in Source Control Explorer.

The Get Specific Version dialog
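
Presumably the same trick can be done from the command line with tf.exe, something along these lines (the server path is made up):

tf get "$/Project/Folder" /version:C1 /recursive

followed by a normal Get Latest to pull the files down to the Workspace-mapped location.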

I immediately noticed that the mapping was gone in Source Control Explorer and the path that it was listing was the folder that I wanted to use. I was then able to do a Get Latest on the folder and download all the files to the correct place.

Source Control Explorer showing the correct local path

Wednesday, October 15, 2014

PowerShell Oddities, Take 4

I ran into another PowerShell oddity today, and this one wasted a lot of time. It boils down to the fact that you can index anything. If you use [0] on something that is not a collection, it will return itself; if you use any other index, it will return nothing. Again, this was buried deep in nested code called from other functions, and it took a long time to figure out what was going on.
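
A quick way to see that behaviour in isolation (on PowerShell 3.0 or later):

(5)[0]    # returns 5
(5)[1]    # returns nothing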

Take the following:

$settings = @{"Env1" = "VarValue1"; "Env2" = "VarValue2" }
Write-Output "Count: $($settings.Values.Count)"
Write-Output "Value 0: '$($settings.Values[0])'"
Write-Output "Value 1: '$($settings.Values[1])'"

It sure looks like you are getting a proper collection. Values is indeed a collection, which makes this result all the more confusing:

Count: 2
Value 0: 'VarValue2 VarValue1'
Value 1: ''

Solution:

$values = $settings.Values -as [string[]]
Write-Output "Count: $($values.Count)"
Write-Output "Value 0: '$($values[0])'"
Write-Output "Value 1: '$($values[1])'"

Output:

Count: 2
Value 0: 'VarValue2'
Value 1: 'VarValue1'

The weird thing is that I swear I tried doing an implicit cast before I posted my question to http://stackoverflow.com

After posting my question I realized I had not tried an explicit cast. So, I tried that and it worked. That seemed weird, so I tried the implicit cast again and it also worked. I guess I am going crazy. For reference here is the implicit cast:

[string[]]$values = $settings.Values

Wednesday, October 8, 2014

Command 'nuget spec' Sucks for Dlls

Not So Nice

The NuGet documentation says that you can create a nuspec file from an existing DLL if you run the following command:

nuget spec [path to dll]

Unfortunately, the output is less than satisfying:

<?xml version="1.0"?>
<package >
    <metadata>
        <id>SomeAwesomeDll.dll</id>
        <version>1.0.0</version>
        <authors>swoogan</authors>
        <owners>swoogan</owners>
        <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
        <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
        <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>Package description</description>
        <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
        <copyright>Copyright 2014</copyright>
        <tags>Tag1 Tag2</tags>
        <dependencies>
        <dependency id="SampleDependency" version="1.0" />
        </dependencies>
    </metadata>
</package>

Note: the version is always 1.0.0, because it does not bother to extract the version number from the assembly.

All in all, not what I was hoping for!

The three glaring problems are:

  1. It does not add the version, as mentioned above
  2. It does not add a <files> section with the name of the DLL you pointed it at!!!
  3. As a result of the second point, it does not produce a valid nuspec file that can be packaged.

A Better Way

Here is a preferable default template:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
    <metadata>
        <id>[[DllNameWithoutExtension]]</id>
        <version>[[Version]]</version>
        <authors>swoogan</authors>
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>[[DllNameWithoutExtension]]</description>        
    </metadata>
    <files>
      <file src="[[DllName]]" target="lib\[[DllName]]" />
    </files>
</package>

Using the above template and a PowerShell template engine that I adapted from
http://bricelam.net/2012/09/simple-template-engine-for-powershell.html

   function Merge-Tokens
   {
       <#
       .SYNOPSIS
       Replaces tokens in a template
       .DESCRIPTION
       Replaces tokens found in a template file with values from the supplied hashtable
       .EXAMPLE
       Merge-Tokens -Path mytemplate.xml -Tokens @{ Variable = Value }
       .EXAMPLE
       Merge-Tokens -Path mytemplate.xml -Tokens @{ Variable = Value } -OutputPath newfile.xml
       .PARAMETER Path
       Path to the template file to replace the tokens in
       .PARAMETER Tokens
       Hashtable with the names of the tokens in the file and their replacement values
       .PARAMETER OutputPath
       Path to write the output to. If not supplied, the replacement is returned as a string
       #>
       [CmdletBinding()]
       param 
       (
           [Parameter(
               Mandatory=$True,
               Position=0)]
           [string] $Path, 

           [Parameter(
               Mandatory=$True,
               Position=1)]
           [hashtable] $Tokens,

           [Parameter(
               Mandatory=$False,
               Position=2)]
           [string] $OutputPath
       )

       $template = Get-Content $Path -Raw

       $output = [regex]::Replace(
           $template,
           '\[\[(?<tokenName>\w+)\]\]',
           {
               param($match)
               $tokenName = $match.Groups['tokenName'].Value
               return $Tokens[$tokenName]
           })

       if ($OutputPath -ne "") {
           Set-Content -Path $OutputPath -Value $output
       } else {
           Write-Output $output
       }
   }

I was able to write the following script:

   param (
       [string] $Path
   )

   Import-Module .\Merge-Tokens.ps1

   $nuspecTemplate = "$PSScriptRoot\nuspec.tpl"

   $fileInfo = gci $Path

   $tokens =  @{ 
                DllName = $fileInfo.Name
                Version = $fileInfo.VersionInfo.ProductVersion
                DllNameWithoutExtension = $fileInfo.BaseName
               }

   $outputPath = ($fileInfo.Directory.FullName + "\" + $fileInfo.BaseName + ".nuspec" )

   Merge-Tokens -Path $nuspecTemplate -Tokens $tokens -OutputPath $outputPath
  

In this case, I stored the sane template from above in nuspec.tpl. Notice that I passed the -Raw parameter to Get-Content; this causes it to read the entire file as a single string rather than an array of strings, which lets [regex]::Replace return a string that preserves the newlines found in the original file.
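
Assuming the script is saved as New-Nuspec.ps1 (the name and the share path below are made up), using it and packing the result looks something like:

.\New-Nuspec.ps1 -Path \\fileserver\libs\SomeAwesomeDll.dll
nuget pack \\fileserver\libs\SomeAwesomeDll.nuspec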

Putting it all together, I was able to generate nuget packages from dozens of existing DLLs stored on a network share.

Wednesday, October 1, 2014

What the Heck Happened to my LTS?

Around May I updated my workstation and my wife’s laptop to Kubuntu 14.04 LTS. I had lots of trouble with each version of Kubuntu leading up to 12.04 LTS. There was always something that did not work, and I would wait very impatiently for the next release hoping it would fix all my issues. While it usually did, I invariably ended up with some new issue or issues. When 12.04 finally came out it was the first time everything worked. As they say, if it ain’t broke, don’t fix it.

And I did not. That is, until 14.04 came out. Along with being another LTS (long term support) release, it had updates for a few applications that included features I wanted.

The weird thing was, ever since the release, my wife’s laptop seemed to need a lot of updates. Since it was a new release I did not think too much of it. New versions often have a lot of updates in the first couple of months. Also, because I updated my workstation a little later (I figured why not throw the wife to the wolves first) I could not compare the two machines directly.

Well, in the last couple of weeks we have been having a lot of trouble with the wife’s machine. After one update, local DNS stopped working. After another, her X-server would not use the nVidia drivers. Then in the last couple of days I have noticed hundreds of MBs of downloads that my machine was not getting. And kernel versions that were several ahead of mine.

I have not been keeping track of the silly codenames for Ubuntu for a while now. After seven or eight they really start to blur together. But tonight, when once again updating my wife’s machine and seeing that mine had no updates, I noticed utopic/main in the package list retrieval. I quickly ran an update on my machine and sure enough it was trusty/main. A quick check of the wiki and I found the problem. Utopic is from the future!!!

Sure enough:

swoogan@workstation:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.1 LTS
Release:        14.04
Codename:       trusty

swoogan@laptop:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu Utopic Unicorn (development branch)
Release:        14.10
Codename:       utopic

Somehow the last person I want running a beta release of Kubuntu, is. So that explains all the problems we have been having. How this happened remains unexplained…

Wednesday, September 24, 2014

Local DNS Stops Working After Kubuntu 14.04 Upgrade

The Case of the Disappearing Synology

So the wife informs me that she cannot access our Synology server. Normally that would not bother me, but I happened to be on the other side of the country for business. Being the tech guy around the house has its annoyances, but when you are 2600 km away for work it can be downright frustrating. Not one to get between the missus and her TV shows, I remoted in.

After having her verify that it was on, I began investigating. The machine normally shows up on the network as diskstation, so I tried pinging it. There was no reply. Next I tried resolving the hostname. Nothing. That was really weird because before I left everything was working. External DNS seemed fine; it was just internal DNS that was not working.

Internal DNS is handled by our Asus wireless AP. Its IP address is 192.168.1.1, as it sits behind the telco’s ADSL “modem”. I connected to the Asus and verified that the diskstation was registered there as a connected device. Next, I checked /etc/resolv.conf. It used to have the Asus’s IP, but now it had the local IP, 127.0.0.1. Since external DNS was resolving and the system was set to resolve via localhost, that told me we were now running a DNS server on the laptop itself. That struck me as very odd, since I never set one up; there was no need to. I have a DNS server running on the Asus (and it actually works).

swoogan@laptop:/etc/NetworkManager$ ps aux | grep dns
nobody    2366  0.0  0.0  38080  3540 ?        S    11:05   0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127.0.1.1 --conf-file=/var/run/NetworkManager/dnsmasq.conf --cache-size=0 --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq --conf-dir=/etc/NetworkManager/dnsmasq.d 

Apparently I am now running a dnsmasq server locally. Very interesting.

swoogan@laptop:~/$ cat /var/run/NetworkManager/dnsmasq.conf 
swoogan@laptop:~/$ 
swoogan@laptop:~/$ ls /etc/NetworkManager/dnsmasq.d
swoogan@laptop:~/$ 

OK, so how exactly is this thing configured?

I began to suspect that, since her machine uses WiFi and its networking is controlled by NetworkManager, NetworkManager was doing something. So I searched online about NetworkManager and dnsmasq and found out that there is another tool in the mix called resolvconf. I had never heard of this tool.

What is this? The Microsoft school of networking? Need 17 components, with 17 points of failure, to get something simple working?

So I continue down the rabbit’s hole:

swoogan@laptop:~$ ls /etc/resolvconf
interface-order  resolv.conf.d  update.d  update-libc.d
swoogan@laptop:~$ cd /etc/resolvconf/resolv.conf.d
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ ls
base  head  original
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ cat base
swoogan@laptop:/etc/resolvconf/resolv.conf.d$
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ cat head
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# Generated by NetworkManager
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ cat original
domain gateway.2wire.net
search gateway.2wire.net
nameserver 172.16.1.254

Well, is that not interesting? That is the information that used to be in my /etc/resolv.conf before I bought the Asus. After the Asus went in, DHCP changed it to 192.168.1.1. Where the heck did it get this from? And why now, several months later?

swoogan@laptop:/etc/resolvconf/resolv.conf.d$ sudo mv original ~/
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ ls
base  head
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ echo "nameserver 192.168.1.1" | sudo tee tail
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ ls
base  head  tail
swoogan@laptop:/etc/resolvconf/resolv.conf.d$ cat tail
nameserver 192.168.1.1

I just guessed at the name tail, given head and base. After that, the host diskstation was resolving. I do not know how or when this happened but it was after the upgrade to 14.04. The weird thing is that it was a while after the upgrade.
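
If the change does not seem to take effect on its own, regenerating resolv.conf should pick up the new tail file (going by the resolvconf man page):

sudo resolvconf -u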

Wednesday, September 17, 2014

Swap is Good

I was going to take a screenshot showing how I really almost need an upgrade when things went off the rails…

Dangerously close to running out of memory

Suddenly my computer stopped responding. Then, after a long time, I got a message from Chrome that it could not allocate any more memory. I thought that was odd since I have 32GB of swap space. That is when I noticed (as you may have already) that System Monitor was telling me there was “No swap space available”. Interesting…

I don't think those should be commented

Not sure what I was doing or when, but at some point I had commented out the swap entries in /etc/fstab and clearly forgot to undo it. I uncommented those two lines and executed:

$ sudo swapon -a

System Monitor showing swap space available again

That is much better. Shortly after, about half a gig of memory was swapped out and things started working a lot more smoothly.
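
For anyone without the System Monitor widget handy, the same check can be done from a terminal:

$ swapon -s
$ free -m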

Saturday, September 13, 2014

Azio L70 Keyboard Linux Driver, The Implementation

Implementing the Azio Driver

At this point I was ready to implement the Azio driver, so I copied usbkbd.c to a file called aziokbd.c and began making my edits there. I left the driver in drivers/hid/usbhid to make compiling easier. Obviously, I changed the script from last time to work with a module named aziokbd instead of usbkbd.

This is where developing in a virtual machine had a hidden benefit. I could easily toggle the USB passthru from the VirtualBox menu and thereby simulate plugging and unplugging the keyboard from the guest machine. Unfortunately this hid a problem from me that I will get to later.

To implement the driver, I just had to change the lines in usb_kbd_irq to report the correct keycodes, via input_report_key, for the bytes coming in from the hardware. I already had the pattern worked out from reverse engineering the protocol with Wireshark and usbmon.

The Azio keyboard breaks the keys up into three chunks. When the first byte in the array is 01, it is a volume control. When it is 04, that is a “regular” key like a-z, 0-9, etc… Finally, a 05 indicates the function keys and numpad. In the driver I simply broke these three cases out into their own respective if/else if branches. Since the volume only has two controls (up and down) I did not do anything fancy and just implemented them naively. For the other two cases I used the bitmasking trick and went through the remaining 7 bytes in the array.

The real trick was setting up the keycodes in the usb_kbd_keycode array such that with a little math I could easily correspond an incoming bit with the outgoing keycode. I did that by arranging the keycodes into 8 rows of 8. The first 64 elements were for byte arrays starting with 04, the second 64 were for byte arrays starting with 05 and the rest remains unused (other than the two volume keys).

With the keycodes structured this way I could index into the array by taking the position of each bit and multiplying it by the position of the byte I was inspecting. For 05 keys, I just had to offset the indexing by 64 to move to the next 8x8 block in the array.

Using the Driver

Once that was complete I was able to compile and begin using my driver. As a matter of fact, I was already using the driver by this point. About the last half of the driver development was done with the Azio keyboard and my driver. Whenever I encountered a key that was not yet implemented I used the secondary keyboard. That would cause enough pain to implement the key. The implementation outlined above was the result of some refactoring and not the original algorithm. The only thing left was to get the LEDs for the lock keys working.

This was a pretty exhilarating milestone in my little project. At this point I ditched the VM and moved development to my workstation proper. This was when I discovered a second thing that was not working right. This is the issue I was referring to earlier, that the VM hid from me. The generic usbhid driver was always grabbing the keyboard first and the azio driver was not loading. Even after running modprobe aziokbd, my driver was not getting access to the physical device.

ZOMG! Quirks are Quirky

This turned into a massive time sink. It is one that I am not sure I have escaped even to this day. If you search online for blacklisting a USB device you will find a lot of other people searching online for how to blacklist a USB device. Nobody really seems to know. In fact, there appear to be two ways of doing it, depending on whether the driver is compiled into the kernel or as a module. What you will find is that there is this thing called USB quirks. What you will not find is a consistent, well-documented, and clear way to apply a “quirk”.

Unfortunately, even though I have got this working, it still feels as though I do not have it nailed down. Blacklisting works by passing an option called “quirks” to the usbhid driver. The first part of the option’s value is the 16-bit USB vendor id, the second part is the 16-bit product id, and the third part is “the u32 quirks value”. You can read the sum total of the documentation on this that exists in the entire world on lines 178-188 of hid-quirks.c. What are the valid u32 quirks values, and what do they mean? Apparently nobody knows. If you know where they are documented, please email me; I would very much like to know. There are just faint whispers on the wind that this is how you do it, and some people have had success.

The USB vendor and product ids are easily obtained by running lsusb -v and finding your device (assuming it is plugged in). Many places on the web will tell you that the magic number is 0x0004. I am here to emphatically tell you that 0x0004 DOES NOT WORK… EXCEPT WHEN IT DOES!. Honestly, at this point I do not know what to tell anyone.

In sum, the command looks like this:

quirks=0x0c45:0x7603:[MAGIC_NUMBER]

You can pass it to the driver on the commandline by placing it after the driver name when calling modprobe, like so:

sudo modprobe usbhid quirks=0x0c45:0x7603:[MAGIC_NUMBER]

Since the usbhid driver will already be loaded, the full command is:

sudo rmmod usbhid && sudo modprobe usbhid quirks=0x0c45:0x7603:[MAGIC_NUMBER]

This is a good way to test it out and make sure that you have the quirk right, but eventually you will want this thing to just work at boot up. To do that, you put the quirks into file in /etc/modprobe.d. I created the file usbhid.conf with the following contents:

options usbhid quirks=0x0c45:0x7603:[MAGIC_NUMBER]

Here is the weird and confusing part. On my VM I had success with the magic number of 0x0007. To this day I can go back through my bash command history and see where I issued it many times. Furthermore, if I look at my /etc/modprobe.d/usbhid.conf it has the following line:

options usbhid quirks=0x0c45:0x7603:0x0007

It is working on my VM as I write this. I can passthru the L70 keyboard and reboot the VM and it works.

Transitioning to the Workstation

For some reason when I switched to my development workstation the quirk was not working. At that point I just sort of gave up. I would just load the driver with the commandline (except it was slightly more complicated because I had to also unload and load my mouse driver) and then sleep my machine.

Eventually that became a hassle: I got tired of having two keyboards attached to the computer, so I sat down one night with the goal of solving it once and for all. I spent another several hours searching and loading and tweaking before I was ready to give up. I thought, why does this work on the VM and not my desktop? Although I swear I copied the original file from the VM, I thought it time to compare the two. Sure enough, I noticed the magic number was different. My workstation’s /etc/modprobe.d/usbhid.conf looked like this:

options usbhid quirks=0x0c45:0x7603:0x0004 

I never did notice that I was using 0x0007 on the commandline but the file was using 0x0004. When I changed the four to seven it suddenly started working.

I know that at this point you are thinking I am kind of an idiot, but to this day I am sure that I would have started from the same working point as the VM and that I only began researching a second time when it did not work. However, I cannot rule out the notion that I put that stupid 4 in there to begin with and that it was the problem the whole time.

Damn you Quirks!

Now here is where it gets interesting. The other day I upgraded to kernel 3.13.0-15 and my keyboard stopped working. Although I had much better things to do that night, I spent the evening trying to figure out why. Hours went by and I felt like it was Groundhog Day. But this time was a little different: nothing would let me load that driver. I never figured it out and finally went to bed.

The next day I saw there were updates, and one of them was a new kernel, 3.13.0-16. I installed it, rebuilt the driver and loaded it, but the keyboard was still not working. Looking at the dmesg trace I could see that the usbhid driver was grabbing the keyboard before the azio driver was loaded. This was not supposed to happen with the quirk in the config file. Since I had only rebooted about 700 times in the last two days, I figured, what the heck? I will change that seven to a four; it’s about the only thing I haven’t tried. You already know it worked, right? So here I am, typing this blog post with a usbhid.conf that looks like this:

swoogan@workstation:~$ cat /etc/modprobe.d/usbhid.conf 
#options usbhid quirks=0x0c45:0x7603:0x0007
options usbhid quirks=0x0c45:0x7603:0x0004

Let’s just say I am waiting for the day when I will be switching those two around. I still find it hard to believe that the value that did not work before now works, and that I have two different quirks on the two machines. It is worth noting that the VM runs a much older kernel.
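
For what it is worth, the quirks value is a bitmask of HID_QUIRK_* flags from the kernel’s include/linux/hid.h. I am going from memory here, so verify these against the headers for your kernel, but the low-order flags are roughly:

#define HID_QUIRK_INVERT    0x00000001
#define HID_QUIRK_NOTOUCH   0x00000002
#define HID_QUIRK_IGNORE    0x00000004
#define HID_QUIRK_NOGET     0x00000008

If those values are right, both 0x0004 and 0x0007 contain the HID_QUIRK_IGNORE bit, which is the one that tells usbhid to leave the device alone, so in theory either value should have blacklisted the keyboard. That only makes the whole thing more baffling.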

Lighting up the LEDs

Figuring out the LEDs was a little tricky. Again, I did not know where in the driver I should be looking. There are a couple of places where the constants LED_NUML, LED_CAPSL, and LED_SCROLLL are used, so I littered the area with printk statements. After more and more printk statements, and toggling the lock keys a few dozen times, I narrowed it down to the line kbd->cr->wIndex = cpu_to_le16(interface->desc.bInterfaceNumber);. It seemed that desc.bInterfaceNumber was not holding the value that should be passed in. After a little more tinkering, I got the LEDs to work by simply hardcoding 0 instead. The final line is:

kbd->cr->wIndex = cpu_to_le16(0);

I will be honest and say that I do not know why that works or if it really does work in all cases. But it seems to work.
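
For a bit of context, that line is part of building the SET_REPORT control request the driver sends to the keyboard to update the LEDs. From memory, the surrounding block in usbkbd.c looks roughly like this (treat the exact fields as approximate), with my change on the wIndex line:

kbd->cr->bRequestType = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
kbd->cr->bRequest = 0x09;              /* HID SET_REPORT */
kbd->cr->wValue = cpu_to_le16(0x200);  /* output report, i.e. the LEDs */
kbd->cr->wIndex = cpu_to_le16(0);      /* interface number, hardcoded to 0 */
kbd->cr->wLength = cpu_to_le16(1);

My best guess is that the L70 expects LED reports on interface 0, while desc.bInterfaceNumber held whichever interface the driver had actually bound to, so the request was being addressed to the wrong interface.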

Building the Driver “Out of Tree”

To build a Linux driver outside of the kernel source tree you just need an appropriate Makefile. I created a folder for the Azio driver in my standard development area, moved my aziokbd.c file into it, and created a Makefile. To be honest, I shamelessly copied someone else’s Makefile; I do not even remember where I got it from.

The only things I did were change whatever was in obj-m to aziokbd.o and add an install target:

install:
    cp aziokbd.ko /lib/modules/$(shell uname -r)/kernel/drivers/input/keyboard
    echo 'aziokbd' >> /etc/modules
    depmod

Final Steps

Now we get to today. I am using my Azio L70 keyboard daily and quite enjoying the fact that I wrote the driver for it. However, there are two tasks I still have to work on:

  1. Fix the Meta key. It was working but has recently stopped functioning.
  2. DKMS

DKMS is Dynamic Kernel Module Support, a mechanism for building source modules automatically whenever a new kernel is installed. If your module is not in the kernel tree, it is not included with system updates, and if you have built it from source, it only gets built for one specific kernel version. This means that without DKMS you have to rebuild the module every time you do a kernel upgrade. In my case that is particularly cumbersome because my keyboard is blacklisted from the generic usbhid driver, so after a kernel update it stops working altogether.

If you are interested, check out the driver project page.

You can clone the repository with:

hg clone https://bitbucket.org/Swoogan/aziokbd

Wednesday, September 10, 2014

Azio L70 Keyboard Linux Driver, The Setup

Introduction

In parallel with reverse engineering the keyboard protocol I began to investigate how to implement a USB driver for Linux. I assumed that someone had already written a blog post about it and I could just follow their instructions. While there are a few out there, there are not as many as you would think.

The first thing that you will find when searching for how to write a USB driver is that there are two types: kernel mode and user mode. Many USB devices can be operated in user mode. Things like cameras, dart guns, fans, etc… are all candidates. Keyboards, on the other hand, not so much. At least, not if you like to use your keyboard for things like booting, operating GRUB, logging in, and whatnot.

Most of the information out there points you in the direction of making a user mode driver. When you find someone asking how to implement a USB driver, they are quickly steered toward writing a user mode one. That is great for them, as user mode drivers are much simpler to implement, but not great if you really need a kernel-level driver.

Linux USB Drivers

I realized that I was going to have to dig deeper and really understand how USB works in general, and specifically how USB drivers work at the kernel level. Thankfully, there is a terrific resource for that. Do not be put off by its age; it is still very relevant:

Programming Guide for Linux USB Device Drivers By Detlef Fliegl

Not a very inspired title but you have to love it when people get to the point.

I read the entire document. It really demystifies a lot of the aspects of USB. In particular, the most important part is section 2 where it explains the device driver framework and the data structures used.

Along with Detlef’s document, I used Matthias Vallentin’s excellent blog post from 2007 on Writing a Linux Kernel Driver for an Unknown USB Device. I have to say, re-reading his article for this post makes me feel like I have a long way to go in terms of blogging skills. In spite of the fact that Matthias was writing a driver for a dart gun, and I a keyboard, he clearly has a deeper understanding of the underlying driver mechanics.

Some similar information can be found in the Linux Magazine article Writing an Input Module and Michael Opdenacker’s slides on Linux USB drivers.

Since I predominantly learn by example, it was time to dig into some code. This truly is the beauty of open source. I cannot imagine trying to do something like this in a closed source ecosystem.

Getting Started


Development Environment

First, I set up Kubuntu in a VirtualBox VM. I was worried that I might make a mistake with the driver and bring my whole machine down, so isolating it in a VM seemed prudent. Next I connected a second keyboard to my system. That way when I passed the Azio keyboard through to the guest OS I would still be able to interact with the host machine.

To start, I downloaded the Linux source code to my development machine. The command on Kubuntu is:

apt-get source linux

This will download the kernel source to the current working directory and apply all of Ubuntu’s patches.

You also need to make sure you have all the build tooling installed. Nowadays it comes down to a single command:

sudo apt-get build-dep linux-image-`uname -r`

I then began spelunking around the kernel source code. The drivers directory seemed like a good place to start. Indeed, I found two files in particular that were instrumental in getting my own driver implemented. The first is the generic USB keyboard driver found at drivers/hid/usbhid/usbkbd.c and the second is the Sega Dreamcast keyboard driver found at drivers/input/keyboard/maple_keyb.c.

Digging into the Existing Drivers

I found a kernel function called printk that allows you to write messages from the driver. I littered the existing usbkbd driver with printk statements to figure out where and what I would need to change in order to get the keyboard working. The messages are available from the dmesg command. On Kubuntu they are also written to /var/log/dmesg so I was able to load the driver, run

tail -f /var/log/dmesg

and watch for the debugging statements.
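
If you have never used printk, it works a lot like printf, with an optional log-level prefix. A typical line I scattered around looked something like this (the message itself is just an example):

/* inside usb_kbd_irq(), using the driver's own variables */
printk(KERN_INFO "usbkbd: irq status=%d, first byte=0x%02x\n",
       urb->status, kbd->new[0]);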

Compiling the Existing Driver

The real trick was compiling the little bugger. I did not want to build the entire kernel, as that is very time consuming (and unnecessary). I did not even want to build all the drivers, or even all the HID drivers. I just wanted to build the usbkbd.c driver and load it. After a lot of searching I found that you can build it with the following command:

make modules SUBDIRS=drivers/hid/usbhid

Sweet, just the one subdirectory! Then load the module with:

sudo insmod drivers/hid/usbhid/usbkbd.ko

And promptly get the following error:

insmod: error inserting ‘usbkbd.ko’: -1 Invalid module format

After lots and lots and lots of searching, with a bunch of red herrings thrown in, I found that it is not really the wrong format; it is just that the version of my precompiled kernel and the version of the module were not in sync. I found the solution in the Kernel Module Programming Guide, section 2.8, Building modules for a precompiled kernel. I needed to add Ubuntu’s version suffix. In my case I was running patch 56 with the generic kernel, so I had to add EXTRAVERSION=-56-generic to the make command.

With that problem solved I could, for the first time, load a kernel module with my edits and peer into its inner workings. I began making edits, unloading, compiling, and reloading the module in rapid succession. To help with that, I created a script in the root of the Linux source tree, called rebuild, with the following contents:

#!/bin/sh

make EXTRAVERSION=-56-generic modules SUBDIRS=drivers/hid/usbhid O=~/linux-3.2.0
sudo rmmod usbkbd
sudo insmod drivers/hid/usbhid/usbkbd.ko

*the O= just points to the root of the Linux source tree

Do not forget to chmod +x ./rebuild to make it executable.

Understanding the Generic Driver

Generic driver, usbkbd.c, from the Linux kernel source.

The function where the magic happens is static void usb_kbd_irq(struct urb *urb). It is executed with every USB interrupt (see my previous post in this series for a more detailed description of USB interrupts). The urb struct is the USB Request Block and it holds all the information about the keypress (in this case). The function first checks the status of the URB. There are several statuses upon which it simply returns. Once that gate is cleared, the actual key code handling is executed.
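
That status check is the standard URB completion pattern you will see in most USB drivers. From memory, in usbkbd.c it looks roughly like this:

switch (urb->status) {
case 0:                 /* success */
        break;
case -ECONNRESET:       /* unlink */
case -ENOENT:
case -ESHUTDOWN:
        return;
default:                /* error: resubmit and hope for the best */
        goto resubmit;
}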

The driver stores the interrupt’s key codes in the URB’s context pointer. There are two byte arrays, new and old, that hold the current and previous key codes, respectively. At the beginning of the function the urb->context pointer is copied to the local variable kbd.

struct usb_kbd *kbd = urb->context;

All of the keycodes are stored in a 256 byte array, usb_kbd_keycode, declared earlier in the driver. Finally, input.h includes a function to report a keycode to the kernel, called input_report_key. The first argument is the keyboard device pointer, the second is the keycode, and the third is either a 1 or a 0 depending on whether the key is down or up.

The driver contains two loops that determine the key codes and their states. The first loop uses a neat little C trick that I employed too: (kbd->new[0] >> i) & 1. It takes the first byte in the report, bit shifts it by 0 through 7, and masks the result with 1. If the mask results in a 0 the key is up; if it is a 1 the key is down, and that is what gets reported to the kernel. The actual keycodes in the array are offset by 224, so it adds that to i when indexing usb_kbd_keycode. These keys are the modifier keys like Alt and Ctrl.
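
From memory, that first loop is essentially a one-liner, so treat this as an approximation of what is in usbkbd.c:

for (i = 0; i < 8; i++)
        input_report_key(kbd->dev, usb_kbd_keycode[i + 224],
                         (kbd->new[0] >> i) & 1);

The second loop handles the rest of the keys. It compares the new report to the old one to work out which keys were released and which were newly pressed: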

for (i = 2; i < 8; i++) {
        if (kbd->old[i] > 3 && memscan(kbd->new + 2, kbd->old[i], 6) == kbd->new + 8) {
                /* ... */
                input_report_key(kbd->dev, usb_kbd_keycode[kbd->old[i]], 0);
        }
        if (kbd->new[i] > 3 && memscan(kbd->old + 2, kbd->new[i], 6) == kbd->old + 8) {
                /* ... */
                input_report_key(kbd->dev, usb_kbd_keycode[kbd->new[i]], 1);
        }
}
The last part of the function copies the incoming report into the old array for the next iteration:

memcpy(kbd->old, kbd->new, 8);

Once I had a firm understanding of how the existing driver worked, I was able to implement my own. I will cover that in the next and final post in this series.

Wednesday, September 3, 2014

Reverse Engineering the Azio L70 Keyboard Protocol

To recap, I bought an Azio L70 Gaming Keyboard and discovered it did not work with Linux. I set out to write a kernel driver for it, starting by capturing the USB packets with usbmon and Wireshark.

Once I finally figured out where the actual key codes were in the packets, it was not very hard to work out the pattern. I had both a standard USB keyboard (that functioned in Linux) and my new L70 attached to my computer, so I could strike keys on either and watch the different patterns.

The Leftover Capture Data portion of the USB packets was 16 hex digits long (8 bytes). The regular keyboard sent packets like so:

a -> 00 00 00 00 00 00 00 01
b -> 00 00 00 00 00 00 00 02
c -> 00 00 00 00 00 00 00 03
d -> 00 00 00 00 00 00 00 04

The Azio was sending packets like this:

a -> 04 00 01 00 00 00 00 00
b -> 04 00 02 00 00 00 00 00
c -> 04 00 04 00 00 00 00 00
d -> 04 00 08 00 00 00 00 00

Once the pattern became obvious, so did the reason why the keyboard requires a device-specific driver. In fact, there was a hint right on the packaging. The L70 is billed as a gaming keyboard with “n-key rollover”. Not only that, but this rollover functioned over USB.

When I bought the keyboard and started on this little endeavor I did not even know what rollover was, let alone n-key rollover. For those not in the know, I will attempt to explain.

First a little history lesson…

At one time, PCs used a port called PS/2 for keyboard and mouse input. The PS/2 port used an interrupt-driven mechanism to report key-presses to the system. When a key was pressed, the keyboard interrupted the computer with the key code. If you smashed down 10 keys, the keyboard, insofar as it had enough internal buffer to hold the information, would dutifully report each key as a unique, sequential interrupt. Therefore, all PS/2 keyboards effectively had n-key rollover: the ability to press N keys at once and have the OS receive them all correctly.

Then came USB keyboards and mice. USB is still a serial protocol, but unlike PS/2 it is polling-based. Rather than being interrupted by the device, the host is responsible for constantly checking the bus to see if there is any information on it. This causes problems for rollover, a.k.a. multiple simultaneous key-presses, on USB keyboards. What happens is that the key presses can pile up and, in effect, change. At least that is my understanding. For example, take the keyboard input above. Key A sends a key code of 1; B and C send 2 and 3 respectively. Pressing A and B simultaneously might therefore cause the system to see a 3 (or C) rather than the two separate key presses.

The only way to work around this limitation is to use a different protocol, one where every key-press is unique and no combination of keys will produce another key’s code. This is exactly what Azio did. The 1, 2, 4, 8, … pattern seen above is obvious to anyone familiar with bit masking: each key gets its own bit.
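
To make the difference concrete, here is a rough sketch of how a driver could decode a report where every key has its own bit. The report layout and the azio_keycode table are made up for illustration; the first byte of the real packets looks like some kind of group or report identifier, so the actual driver is not quite this simple:

/* Illustration only: report each key's state from an 8-byte bitmask report.
 * azio_keycode is a hypothetical table mapping bit positions to key codes. */
static void report_bitmask_keys(struct input_dev *dev, const unsigned char *report)
{
        int i, j;

        for (i = 0; i < 8; i++)
                for (j = 0; j < 8; j++)
                        input_report_key(dev, azio_keycode[i * 8 + j],
                                         (report[i] >> j) & 1);
        input_sync(dev);
}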

Other companies take another approach. They include a PS/2 adapter and advertise their product as having, most typically, 6-key rollover on USB and n-key rollover on PS/2 (with the adapter). This is the route that both Das Keyboard and Code took. Consequently, they work with Linux and the standard USB keyboard driver.

One thing that still bugs me is that I never determined why the keyboard functioned as well as it did. Why, when it was sending such different input, did the vast majority of keys work? Remember, it was only the Ctrl, Alt and Meta (Windows) keys that did not operate.

Once I had the protocol reverse engineered it was then a matter of writing a Linux USB device driver. I will cover that in a later post.

Friday, August 22, 2014

Capturing the Azio Keyboard Data

By the time I had replaced every component of my computer, including the mouse, I thought, hey, why not the keyboard too? As I mentioned in the first part of this series of posts, I got an Azio L70 Gaming Keyboard. The last thing I expected was for it to not work on Linux. To be honest, it did not even occur to me that that might be a possibility. I scrutinized every single piece of hardware I bought for my new computer, making sure it would all play nice with Kubuntu, except the keyboard.

Mistakenly, I thought all keyboards used a standard protocol. Although the device reported itself as a standard HID-compliant device, the Linux usbhid driver was not receiving all key-presses. In particular, the Ctrl, Alt and Meta (Windows) keys did nothing.

After verifying on Azio’s website that Linux really was not supported, I made the decision to attempt to write a driver for it. The first step toward my goal was reverse engineering the protocol. Again, it clearly was not standard, so what was it?

In order to understand what was going on I needed some way to read the raw I/O that was going over the USB bus. After a brief bit of searching, I found out that with the usbmon driver, Wireshark can do just that.

See: http://macgyverque.net/2012/12/03/monitoring-usb-traffic-using-wireshark-under-linux/

There were a few hoops to jump through in order to get things to work. First, ensure that you have the usbmon kernel module installed. I will leave it as an exercise for the reader to determine the specific instructions for their distro. You will also need to insert the module into the kernel, if it is not loaded by default. On Kubuntu I ran:

$ lsmod | grep usbmon
$ sudo modprobe usbmon

When you first run Wireshark you will not see any interfaces to capture. You need extra privileges to do that. This makes sense, since you are accessing the hardware at a pretty low level and could read other users’ I/O. So close it down and run:

$ sudo wireshark

You will get an error right off the hop, plus a tip about running as an unprivileged user (I am guessing they disable Lua because it is a programming language, and we all know what happens when you run macros as a superuser, *cough* MS Office *cough*).

Lua Error

Unless you are going to do a lot of capturing, you can do what I did and just ignore the error. At which point, you will get the next dialog, this time a warning:

I am sure there *is* a better way. But reading a doc would take time and I have got better (and more fun) things to do.

[Aside: they need to sic a UX designer on these dialogs. They are all messed up.]

Now you can actually start a capture. You do this by going to the Capture menu and clicking the Interfaces option. This will bring up a ridiculously sized dialog that you first have to make bigger (every time!). Once that annoyance is out of the way, you will see all of your USB and network interfaces. There are often many USB interfaces, and it can be a little tricky to find the one for the device you are looking for. Basically, you want to interact with the device and watch the Packets column in the dialog. You might have to try a couple, but you are looking for the one whose packet count goes up while you use the device in question.

Finding the right usbmon interface

When you have narrowed it down to an interface you want to try, just click the start button and that will begin the capture process.

Now the fun part begins.

The hardest part of this whole process was figuring out which of the various packets had the info I was looking for. First, because the driver responds back to the keyboard with each keypress, you have to figure out who is who; in other words, which packets are from the keyboard and which are from your computer. Once that is sorted out, you have to figure out which of the packets the keyboard is sending actually contain the keypresses (there are a bunch of control packets in the mix), and finally you have to find out which part of the packet holds the code for the key that was pressed.

Example usbmon capture

I am a little embarrassed to think how much time it actually took me. Once I had it figured out, it seemed so obvious, but for some reason I just was not picking up on the pattern at first. Everything I needed was in the Leftover Capture Data field.

Once I had narrowed down where in the packet the key code was, I began the tedious process of clicking each and every key on the keyboard and figuring out the keycode pattern. Additionally, I had to decipher the system for the Caps, Num and Scroll locks and their associated LEDs. They are slightly different because the driver must maintain the state of each lock and signal it back to the keyboard.

How I did all that will be covered in the next post.

Thursday, August 14, 2014

Azio Keyboard

When I bought my new computer, I slowly but eventually replaced every component. With each new component I got, another suddenly seemed in need of replacing. When all was said and done, I had replaced everything but my keyboard. On one last shopping trip I figured why not finish the job and get a keyboard too?

For about a year I had been looking into mechanical keyboards and wondering if I should buy one. A lot of people say that if you are going to use it to interface with the computer constantly, it is worth spending top dollar on a keyboard. However, the $100+ price tags have always scared me off, especially when you can get a Logitech special for $9.

I had been eyeing up the Das Keyboard and the Code. I particularly like the 87-key Code model. With the built-in keyboard tray on my desk, when I am sitting centered to the desk the keyboard is actually offset slightly to the left to make room for my mouse. Without the numpad, which I do not use very often, I could center the keyboard with my body. Unfortunately, these keyboards are very expensive, almost always sold out, and have limited availability in Canada. And finally, I only use the computer at home for a few hours a day.
Nice narrow keyboard

Following my typical methodology, I went to http://ncix.ca and compared what was on sale that week. I decided to get a higher-end non-mechanical keyboard. I really wanted something with media keys, or the like, because when I have my headphones on for gaming I have no way to control the volume while a game is full-screen. I ended up settling on the Azio L70 Gaming Keyboard for a really good price.

Azio L70

I have got to tell you, I really like this keyboard. It is a lot better than the $9 Logitech specials. It is way heavier and sturdier, and it stays in place nicely. The keys have a much better feel, and they press and respond like no other keyboard I have used. I immediately felt like I could type faster on it.

There was just one little problem… This keyboard does not work on Linux. The volume knob (which I love) and the standard keys all work, but the Ctrl, Meta, Alt and Menu keys all do nothing. After searching their website, it was confirmed: they do not support Linux at all.

It was not going to be worth it to send it back. Worst case scenario, I would take it to work and use it on my Windows machine. But I thought, what if, just maybe, I could write a driver for it? I had never written a Linux driver, or worked with USB, or written in C, but how hard could it be?

Wednesday, August 6, 2014

PowerShell Oddities, Take 3

I ran into a weird problem today. I was banging my head against the wall for a little while before I kind of figured out what was going wrong and how to fix it. If you are like me, you would expect the following code to create an array with two strings:

$foo = "Test"
[string[]] @($foo + "_Suffix1", $foo + "_Suffix2")

It even looks like it does. The output is:

Test_Suffix1 Test_Suffix2

But look again. Let’s index into the array:

$foo = "Test"
[string[]] @($foo + "_Suffix1", $foo + "_Suffix2")[0]

As you can see, element zero is both strings concatenated together, not the first of two elements:

Test_Suffix1 Test_Suffix2

I fixed it by changing the code to the following:

$foo = "Test"
[string[]] @("$foo`_Suffix1", "$foo`_Suffix2")

Here the output is different, and correct:

Test_Suffix1
Test_Suffix2

Again, if we index into the array:

$foo = "Test"
[string[]] @("$foo`_Suffix1", "$foo`_Suffix2")[0]

Output:

Test_Suffix1

Here is the part that really threw me off:

$foo = "Test" 
([string[]] @($foo + "_Suffix1", $foo + "_Suffix2")).GetType() 
([string[]] @("$foo`_Suffix1", "$foo`_Suffix2")).GetType()
IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     String[]                                 System.Array

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     String[]                                 System.Array

Both report String[], so either way it looks like you are getting your array.

That made it hard to figure out because the code was actually buried deep in a sequence of calls. In the former, you get an array with everything stuffed into the first element. In the latter, you actually get two elements.

What is really interesting is that given the above, you would think the following would be the same as the code that worked, but in fact it acts like the failing code:

$foo = "Test" 
[string[]] @($foo + "`_Suffix1", $foo + "`_Suffix2")[0]

Output:

Test_Suffix1 Test_Suffix2

Clearly, what I thought was going on is not what is happening. Let’s dig in a little further…

See, I was assuming that the _ was throwing the concatenation off somehow, which stands to reason when you try something like this:

$foo = "Test" 
[string[]] @("$foo_Suffix1", "$foo_Suffix2")

In that case, the string parser sees $foo_Suffix1 as one long variable name.

Let’s try the original with some extra parentheses:

$foo = "Test"
[string[]] @(($foo + "_Suffix1"), ($foo + "_Suffix2"))

Output:

Test_Suffix1
Test_Suffix2

A little more experimentation:

PS C:\> [string[]] @("asfsadf", "asfdsdf")
asfsadf
asfdsdf
PS C:\> [string[]] @("asdf" + "asfsadf", "asfdsdf")
asdfasfsadf asfdsdf

So, the comma changes the meaning of the concatenation. This is going to require further investigation. My hunch is that the , takes precedence over the + and that what you are actually doing is passing an array to the concatenation operator.

In other words, what I expected to happen was:

[string[]] @(("asdf" + "asfsadf"), "asfdsdf")

But what is actually going on:

[string[]] @("asdf" + ("asfsadf", "asfdsdf"))

Definitely something to be aware of.