Wednesday, June 3, 2015

Changing the Date on a Series of Git Commits

Fixing Your Mistakes

Sometimes you make a mistake when committing to Git. You’ve either committed the changes with the wrong author tag or at the wrong time. For example, you do a bunch of work and then forget to commit before going on holiday. If you commit when you get back, and if you’re a pedant, you might want to change the dates to when you actually did the work.

I looked into doing this, and the various pieces of information I found around the web never seemed to work on their own. I had to cobble together several sources before I got it to work. Hopefully this will save someone the trouble in the future.

You can access the Git command line from Visual Studio by selecting Changes in the Team Explorer window, then select Open Command Prompt from the Actions menu.

The Setup

You can start by reviewing the previous commits. For example

 git log -n 4

commit 9d3f11eeaec8751e386bfcb1dbd4fa9267603afd
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 17:26:53 2015 -0400

    Fixed a bug...

commit 65ad3df245852a5f8d14a314ee63194687eeb25b
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 17:26:21 2015 -0400

    This will change the world!!!1!!

commit 0eb45721f4d0bd1a630bfac8df73ff12c53efb9d
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 10:54:07 2015 -0400

    I impress even myself!

commit f51ced7692f8509b8b5eb2c47c65cb64aec9f415
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 10:49:30 2015 -0400

    Wow, what a crazy feature!

The important pieces of information we need are the commit hashes. To get just the hashes for the last four commits, run:

git log -n 4 --pretty=format:"%H"

9d3f11eeaec8751e386bfcb1dbd4fa9267603afd
65ad3df245852a5f8d14a314ee63194687eeb25b
0eb45721f4d0bd1a630bfac8df73ff12c53efb9d
f51ced7692f8509b8b5eb2c47c65cb64aec9f415

Changing the Dates

Now what you need to do is rewrite the history of your branch.

Note: Do not rewrite the history of your repository if you have already shared it. Only do this on a local copy that you have never pushed.

At this point you should make a copy of your local repository as a backup. This could all end in disaster.

To rewrite the history we will use the filter-branch command in Git.

The general form for filter-branch is:

git filter-branch --env-filter "<shell code>"

The whole shell code part is what makes this difficult on Windows. If you have Cygwin, you are probably better off using it, as I believe it would be more straightforward. The problem with the Windows command prompt is that the shell code is Bash code, but the escaping and quoting between Windows and Bash gets a little dicey.

The following (don’t run this) would change the date on all of the commits:

git filter-branch --env-filter "GIT_AUTHOR_DATE='Mon May 28 10:49:30 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:49:30 2015 -0400';"

If you ignored my warning and ran it anyway, or made another mistake, you can restore your branch from the backup ref that filter-branch creates:

git reset --hard refs/original/refs/heads/master

However, what is needed is a way to change specific commits. To do that, you need to wrap the --env-filter in some logic. Specifically an if statement:

git filter-branch --env-filter "if test $GIT_COMMIT = 'f51ced7692f8509b8b5eb2c47c65cb64aec9f415'; then GIT_AUTHOR_DATE='Mon May 28 10:49:30 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:49:30 2015 -0400'; fi"

Note: On Windows this has to be all on one line. If you break it up, it does not work (unlike on Linux).

If you have to run more than one filter-branch, you will need to add the -f switch to force overwriting of the local backup.

When you run this command it will rewrite the matching commit and all of its descendants. That means those commits will get new IDs and the old ones will be invalid, so if you are rewriting multiple commits you need to loop manually, re-reading the hashes after each run. For example (continuing from the previous commands):

git log -n 3 --pretty=format:"%H"

32a62db0c350e8009477ceadbcf99a9b4529647f
7c97b135de3bb4c61c4c9ebea9580cc195e9dd26
3b0f476ca46d925b37990784f578bad424206e65

git filter-branch -f --env-filter "if test $GIT_COMMIT = '3b0f476ca46d925b37990784f578bad424206e65'; then GIT_AUTHOR_DATE='Mon May 28 10:54:07 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:54:07 2015 -0400'; fi"

git log -n 2 --pretty=format:"%H"

1ccf53c0b570c0303cfde658b33b950bb55cc665
578e6a450ff5318981367fe1f6f2390ce60ee045

git filter-branch -f --env-filter "if test $GIT_COMMIT = '578e6a450ff5318981367fe1f6f2390ce60ee045'; then GIT_AUTHOR_DATE='Mon May 28 17:26:21 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 17:26:21 2015 -0400'; fi"
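If you have more than a handful of commits to fix, you can script that lookup-and-rewrite loop. Here is a rough sketch of my own (not part of the original workflow) in PowerShell; it assumes the commits to change are the four most recent, listed oldest first, and quoting from PowerShell to git can be as finicky as from cmd.exe, so test it on a throwaway clone first:

# My own sketch: re-read the hash at a fixed depth after each rewrite,
# because every filter-branch run gives the later commits new IDs.
$newDates = @(
    @{ Depth = 3; Date = 'Mon May 28 10:49:30 2015 -0400' },   # oldest of the four
    @{ Depth = 2; Date = 'Mon May 28 10:54:07 2015 -0400' },
    @{ Depth = 1; Date = 'Mon May 28 17:26:21 2015 -0400' },
    @{ Depth = 0; Date = 'Mon May 28 17:26:53 2015 -0400' }    # most recent
)

foreach ($item in $newDates) {
    # HEAD~N resolves to whatever hash currently sits at that depth
    $hash = git rev-parse "HEAD~$($item.Depth)"
    git filter-branch -f --env-filter "if test `$GIT_COMMIT = '$hash'; then GIT_AUTHOR_DATE='$($item.Date)'; GIT_COMMITTER_DATE='$($item.Date)'; fi"
}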

Continue in the same fashion until every commit you care about has been fixed. When you’re done, you should be left with something like the following:

git log -n 4

commit a7f42406561f7385bd91310829bad17fdf63cc7a
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 17:26:53 2015 -0400

    Fixed a bug...

commit 8a6c4eb659476d4a562182e46c3511b80422eebe
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 17:26:21 2015 -0400

    This will change the world!!!1!!

commit 02818ff733593fbad7157db01813229e54323080
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 10:54:07 2015 -0400

    I impress even myself!

commit 3003d52854f2996a1d4272350bb59a7ca2069770
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 10:49:30 2015 -0400

    Wow, what a crazy feature!

And that is it. Now you can push your modified repo.

References

How to reset date/timestamps for one or more Git commits.
How can one change the timestamp of an old commit in Git.
Changing the timestamp of a previous Git commit.
Generate a Git patch for a specific commit.

Wednesday, May 27, 2015

Handling Errors from Non-cmdlets in PowerShell

Lately I have found myself implementing multiple builds with TFS that basically just call PowerShell scripts. One of the issues that I have with these scripts is getting good error reporting back to TFS.

Specifically, the issue comes down to the fact that non-cmdlet commandline tools generally return error codes and are incapable of throwing PowerShell errors when they fail. To get the commandline tools working like cmdlets, I took a page out of James Kovacs’ book (blog?) and implemented an Exec function:

function Exec([scriptblock]$cmd) { 
    $result = & $cmd 
    if ($LastExitCode -ne 0) {
        throw $result
    } 
    else {
        foreach ($line in $result) {
            Write-Host $line
        }
    }
}

Which you can simply call with:

Exec { sometool.exe "with parameters" }    

In the case of an error, it will throw an exception. Otherwise, the output will be written to standard output.

In the context of TFS, I put together a nice sequence that will fail the build if an exception is thrown.

First, I include the Exec function. You can find it in a GitHub gist. This one is slightly modified to output the last 50 lines of output when an error occurs.
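The gist has the authoritative version, but a minimal sketch of that modification might look something like this:

function Exec([scriptblock]$cmd) {
    $result = & $cmd
    if ($LastExitCode -ne 0) {
        # Surface only the tail of the output so the build summary stays readable
        $tail = $result | Select-Object -Last 50
        throw ($tail -join [Environment]::NewLine)
    }
    else {
        foreach ($line in $result) {
            Write-Host $line
        }
    }
}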

Next, wrap all of your Exec calls in a try/catch like so:

try {    
    Exec { sometool.exe "with parameters" }    
} catch {
    Write-Error $_
    Exit 1
}

Now presumably, you will be calling your PowerShell script with an InvokeProcess activity from your Build Process Template. In the scope that the activity will be run, create a variable called ExitCode of type Int32:

Adding an exit code

Now set the Result field of the InvokeProcess activity to the new variable:

Setting the exit code

This will set the return code of your PowerShell build script into the variable you created.

Now add a SetBuildProperties activity after the InvokeProcess activity. In the properties of the SetBuildProperties activity, in the PropertiesToSet field check off Status. Then set the Status field to

If(ExitCode = 0, BuildStatus.Succeeded, BuildStatus.Failed)

Setting the build property

Now when your script throws an error, it will return an exit code of 1 and that will be caught by TFS which will fail the build. I acknowledge that it is ironic to convert the tool’s return code to an exception and then back to a return code, but that is what is necessary to have it flow through to TFS correctly.

Wednesday, May 20, 2015

Octopus Deploy: Protecting Yourself from Variable Replacement Errors

Almost every one of my deployments with Octopus Deploy involves a step where variables are replaced in configuration files. Frequently we encounter the case where a variable has not been properly replaced. Now I am not implying there is a bug in Octopus, but rather that mistakes happen. Regularly.

Sometimes we misspell a variable in either Octopus or the config file, or we add a variable to the config but not to Octopus, etc… The deployment succeeds, but the application doesn’t work. It’s common enough that one of the first troubleshooting steps we perform is to search the config file for #{ and see if anything is found.

I got a little bit of time a few weeks ago and put together a script to help resolve this issue in the future.

Source code for Find-Unreplaced PowerShell cmdlet

The script will look in the files you specify for the pattern #{...}. You can also specify the folder to look in, whether the cmdlet should search recursively, and whether matches should be treated as errors. The output includes the full path of each file, the unreplaced variables that were found and the line numbers they were found on.
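If you would rather not follow the link, a stripped-down sketch of such a cmdlet (my approximation, not the published source) looks roughly like this:

function Find-Unreplaced {
    [CmdletBinding()]
    param(
        [string]$Path = '.',
        [string[]]$Files = '*.config',
        [string[]]$Exclude = @(),
        [switch]$Recurse,
        [switch]$TreatAsError
    )

    # Octopus variable syntax: #{VariableName}
    $pattern = '#\{[^}]+\}'

    $found = Get-ChildItem -Path (Join-Path $Path '*') -Include $Files -Exclude $Exclude -Recurse:$Recurse |
        Select-String -Pattern $pattern

    foreach ($match in $found) {
        Write-Warning "$($match.Path):$($match.LineNumber) contains $($match.Matches.Value -join ', ')"
    }

    if ($TreatAsError -and $found) {
        throw "Unreplaced Octopus variables found"
    }
}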

Here are a few examples:

Find-Unreplaced -Path $OctopusOriginalPackageDirectoryPath -Exclude *.Release.config -Recurse -TreatAsError

This will look recursively through the path your package was installed to and, if any unreplaced variables are found, it will throw an error. You could put this at the end of your Deploy.ps1 and it would actually cause the deployment to fail. The -Exclude *.Release.config is there because we expect unreplaced variables in files like Web.Release.config.

Find-Unreplaced -Path .\ -Recurse

In this case the script is looking in the current directory, and again, recursing. No files are excluded and it’s just using the default Files parameter of *.config. It will issue a warning if matches are found. You might put this in your PostDeploy.ps1.

Now I commonly use this script with the -TreatAsError flag set and catch this type of error immediately during the deployment.

Wednesday, May 13, 2015

KDE Multiple Monitors

Where Did my Monitor Go?

I was having an issue with blurry fonts on my workstation. To test if it was user specific, I created a new user and logged in. After determining that it wasn’t, I logged out and deleted the user. That is when things went pear-shaped. Suddenly my multiple monitors were not working correctly. Somehow xorg or KDE had swapped the two monitors around and now my left monitor acted as though it was on the right.

I started playing around with the NVidia settings and that just made things worse. Suddenly, my two desktops had different backgrounds (where they used to be the same image). The right monitor had an image I had set months and months ago. The weird thing is that although the monitor was on and I could move my mouse to it, I could not drag any windows onto it. Xorg saw the monitor, but KDE did not. Or that is as far as I figured.

When I ran xrandr, it said the second monitor was disconnected. At this point I deleted the .kde/share/config/krandrrc file, because in the past these files have interfered with my config. Of course, that did nothing because xrandr could not even see the second monitor.

I have seen this issue before and I know it is a problem with X, so I did the only thing I could think of and deleted the xorg.conf file (if you see a pattern here… yeah, I pretty much just start deleting config files at random when stuff goes haywire). I re-ran the NVidia settings and created a new config file. That seemed to fix the issue. Running xrandr showed that both monitors were connected.

A Love Letter to KScreen

Now when I booted into KDE my desktop was cloned/mirrored. Of course I do not want two views of the same thing, so I loaded the System Settings and went to Display and Monitors.

Actually, on a side note… there used to be a different config tool that came up when you selected this. When I upgraded from 12.04 to 14.04 it disappeared. What was really awesome about that was that, apparently, it was replaced with a tool called KScreen. Unfortunately, it seems as though someone forgot to tell the Kubuntu team, because the upgrade uninstalled the old tool, but did not install KScreen. That meant that after the install there was nothing in Display and Monitors but Screensavers. I really like how they litter the upgrades with these little Easter eggs, so that every once in a while, you can take a break from getting work done and solve a neat little puzzle.

Still with me? Okay, so I loaded up Display and Monitors, which is really KScreen. Aaaaaand, only one monitor is there. You gotta love it when you solve a problem and the problem is still there. So, at first only one monitor is detected and that is why KScreen shows one. Now both are detected and it is still showing only one. What do I do now?

KScreen only showing one monitor

So I searched Google again, but this time I had to ignore all the results talking about Xorg, because I had that working. After sifting through a few results, I found someone who said that if you’re in the situation I was in, it is probably because both screens ARE ON TOP OF EACH OTHER IN KSCREEN!!! That is right, this crazy program will actually dump both monitors on top of each other! I cannot imagine a situation where that would be helpful. Apparently that is how it represents cloned displays. I think the author(s) of this software really need to rethink the usability of this particular “feature”.

Oh there's my other monitor, duh!

Alright, I moved the screens next to each other to solve this little problem. Finally, I put my computer to sleep and went to bed myself. The next day I loaded up the computer. Since I was actually working on a Font problem… yes, maybe one day I would be able to get some real work done… I wanted to see if it was maybe my monitors that were the issue. So I brought my laptop over and connected it to one of my monitors. If anyone cares at this point, the font problem does seem related to the monitors.

So I disconnected my laptop and plugged the monitor back into my computer and guess what happened? That’s right, they are mirrored again. So I fixed them in my new favourite program. This time I logged out and back in to see if the change stuck. Nope, of course it did not. They are mirrored again. I tried changing it, logging out, back in and again they are mirrored. So, I checked for a .kde/share/config/krandrrc and sure enough there is not one.

So here is the deal:

  1. The old configuration tool, that was intuitive and worked, is gone.
  2. It has been replaced with KScreen, which is not installed by default.
  3. KScreen devs think that hiding one of your monitors is a good way to demonstrate that it is cloned.
  4. Settings applied in KScreen only apply to the current session and are not persisted.

So how do I get my settings persisted? I do not know how to author a krandrrc file from scratch. I do not know where they come from. I thought it was KScreen, but apparently not. I did more searching and found about 17 different ways to hack your system to run xrandr commands at bootup, login, etc… Nobody suggested writing a krandrrc file and letting KDE handle it.

One of the guides noted that if you want to get your settings to take and not be overridden by KScreen, you have to disable it as a service:

Shutting KScreen service off

I got thinking that when I boot up, everything seems fine until I log in. As in, xorg is configured correctly and KDM is working correctly (not mirrored); it’s only when I log in that things go off the rails. On a hunch, I decided to do nothing with xrandr and simply turn off the “KScreen 2” service. I figured that it is probably loading at login, not finding a config file and resetting to the default cloned configuration. Not loading it should keep everything configured by X. I gave it a try and rebooted. Voila, everything is working as it should.

Conclusion

I think that maybe now I know why KScreen was not added during the upgrade. It appears that KDE is completely incapable of handling multiple monitor support for you. What was wrong with the old tool? I do not know. But KScreen is useless in my case. The way it defaults to cloned and hides one of your monitors is asinine. So is the fact that it does not actually store your settings anywhere so that they persist between sessions. Finally, the fact that, because it doesn’t store your settings, it always stomps over your system settings is simply infuriating. To say that it is broken is an understatement.

Linux advocates are always talking about Linux on the desktop. Well, when KDE’s multiple monitor support, in 2014, is behind what you got out of the box with Windows XP in 2001, it’s obvious Linux is not “ready for the desktop”. I have been running Linux for 16 years and I can barely tolerate this nonsense. Put the average computer user through this and watch their heads explode.

Wednesday, April 29, 2015

Frameworks vs. Libraries

I got thinking about Rob Aston’s article Look ma, no frameworks and the visceral, zealous reactions it provoked. I suppose that the people defending frameworks have not had the same horrifying experiences with them as I have. [And I will be honest, those are really the only two non-Microsoft frameworks I have worked with. But if I had to choose between working with another “enterprise” framework and slamming nails in my ears, I would take the nails.]

Granted not all frameworks are as enterprisy as the two piles of cow feces I got to work with, and one should not paint with a broad brush, but nonetheless I have to agree with Rob. Now maybe some of those people disagreeing have worked with really lean, productive frameworks (assuming lean, productive frameworks exist). On the other hand I suspect some of them are responsible for creating these monstrosities and feel the need to defend them. If you are one of those people, please leave your company name in the comments so I can make sure I never ever work there.

Things really got out of control when he went off on Angular. I still think that is one of the most hilarious pieces of tech writing I have ever read. It was like reading an Old Man Murray review. The reactions on the other hand were like reading creationist comments on an episode of SciShow.

But as I said, I got thinking about the argument. And I realized that the framework vs. library dispute is a bit of a rehash of inheritance over composition. Anyone who has used a giant framework, or pretty much any framework, will have to admit that you are deriving your application from it. In fact, frameworks often make heavy use of class-level inheritance. Take ASP.NET MVC for example, you cannot get started without first inheriting from System.Web.Mvc.Controller.

As with classes, I do not see why we would not all be in general agreement to favour composing our applications with libraries over inheriting from frameworks. All the same reasons hold true. Some of the problems with frameworks are:

  1. They are very rigid and impose a certain way of doing things. If you go off the beaten path, you will be stung. (WCF I am looking at you)
  2. They often abstract the wrong things or at the wrong times.
  3. Lock in. Oh my, the lock in. They tend to be an all or nothing proposition. As someone who has tried to migrate functionality out/off of a framework I can attest to the fact that it is nearly impossible. To the point where re-writing the application can take less time.
  4. Further to #3 they can be a maintenance headache. Often functionality is buried in them, strewn across various layers.
  5. There are times where doing things with the framework is more work than without (Hey WCF! You again.)

What it comes down to is that with composition of libraries, you are a puppeteer controlling an assortment of marionettes. You are free to move them as you please, decorate them, or cut their strings if you so choose. Conversely, frameworks are the puppeteers and you are the puppet. (I wonder if that is why the same framework advocates seem to be in love with Dependency Injection? They adore inverting control?)

Maybe I am jaded by my recent experiences, but it seems to me there is a really strong tendency lately for people to reach for the most complicated solution first. The sudden rampant use of DI, repositories and service layers is burying tons of simple applications under a morass of abstraction and convolution. I am not sure what happened to KISS and YAGNI, but they seem to have died somewhere along the way.

One thing I assume is true, the people writing these monstrosities must never stick around long enough to maintain them, or I am sure they would stop. Actually, come to think of it, what these people really need to do is maintain an app built with someone else’s custom framework. Then they would see the light.

Wednesday, April 22, 2015

Yummy CherryPy

I recently resurrected an ancient project of mine called JBox. It’s an HTML/Javascript front-end for an MP3 player.

Way back in 2001 I had a headless Linux machine that I had setup as a router and firewall for my home network. Its proximity to the cable modem put it next to the TV and stereo. I soon got the idea to dump all my MP3s onto it and connect it to the stereo. To play them, I would SSH onto the machine and queue them up with mpg123. That worked great for me, but not so great for my two roommates.

Teaching them SSH proved futile. Therefore, they could not skip songs or control the music in any way. We would just leave it running all the time and turn the stereo on and off. When it was playing, you got what you got. When it died, and it did so often, there was no music until I could remote in and re-queue it.

I batted around a couple of client-server models before settling on a web-based HTML interface. I looked at Icecast, and at building my own desktop app and protocol and streaming the sound over the network. None of it was appealing at the time. HTML was ideal since nothing had to be installed locally.

Fast-forward to today and I had this mad idea of putting it up on Github. Sure there are better ways of doing it now, but I still think it’s neat and it proves that I can (or could) program in Python. Now I have found myself getting carried away improving it.

I sought to improve it in two major ways:

  1. Convert to HTML5 and modern Javascript
  2. Make the backend more self-contained and easier to deploy.

In practice that meant:

  1. Rewriting the front-end as a single page application in AngularJS
  2. Integrating a web server into the back-end and eliminating the need for Apache, suexec and cgi-bin.

Lately I have been working with the Go programming language and one of the things that I absolutely love is the low level and elegant way that you can write web applications with it. For example, here is a minimal server from their documentation:

package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

Registering handlers and writing code is such a breath of fresh air compared to building .NET apps buried in layers of abstraction. I wanted to get this same elegance in Python. After searching and researching Python’s copious web frameworks, I came across the beautiful CherryPy project. The example from their homepage was all I needed to know.

import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello World!"
    index.exposed = True

cherrypy.quickstart(HelloWorld())

Because I was writing the front-end in AngularJS, all I really needed from the back-end was a series of REST-based web resources. Therefore, the first thing I needed was to incorporate a static webserver. Typically I would let nginx play this role and just have it reverse proxy to my application. However, in this case I wanted a truly self-contained application. Remember, this is a “desktop” application with an HTML interface, not a website.

It actually took a bit of searching to find how to do this in the simplest way possible. In fact, I ended up sending a pull request to improve the documentation, to make it easier for the next gal/guy.

    import os
    import cherrypy

    class Root(object):
        pass

    if __name__ == '__main__':
        CONF = {
                '/': {
                    'tools.staticdir.root': os.path.abspath(os.getcwd()),
                    'tools.staticdir.on': True,
                    'tools.staticdir.dir': 'site',
                    'tools.staticdir.index': 'index.html'
                      }
        }
        cherrypy.quickstart(Root(), '/', CONF)

It is basically all in the CONF. CherryPy expects an object passed to quickstart, so we just create an empty one. For the config, we simply turn staticdir on, point it to the folder to serve (in this case ‘site’) and set the index file to index.html.

To enable my REST endpoints, I had to add a few more lines. First

SETUP = {'/': {'request.dispatch': cherrypy.dispatch.MethodDispatcher()}}

The MethodDispatcher tells CherryPy to map the request type to the corresponding method. For example, the http verb GET will map to the GET(self) method. Note that the key, '/', is relative off the mount point, not absolute from the root of the server.

Then I just mount a bunch of objects into the web server hierarchy:

cherrypy.tree.mount(songs.Songs(), '/api/songs', SETUP)
cherrypy.tree.mount(volume.Volume(), '/api/volume', SETUP)
cherrypy.tree.mount(nowplaying.NowPlaying(), '/api/nowplaying', SETUP)
...

Here I am mounting an instance of the Songs class to the absolute path /api/songs off the root of the website. It is configured to use the MethodDispatcher. And so on with the Volume and NowPlaying classes.

Here is an example of one of the REST handlers:

    import cherrypy
    from jbox.core import volume

    class Volume(object):
        def __init__(self, config):
            self.exposed = True
            self.vol = volume.Volume(config)

        @cherrypy.tools.json_out()
        def GET(self):
            return {'level': self.vol.level()}

        @cherrypy.tools.json_in()
        def PUT(self):
            json = cherrypy.request.json
            self.vol.set_level(json['level'])

I just love the simplicity of it.

There is, however, one dark side here that I am not terribly happy about. It may turn out that I am missing something, but as far as their documentation is concerned, this is how you send and receive JSON data. I find the implementation a little problematic. Without JSON, the methods would look like this:

    def GET(self):
        return self.vol.level()
        # or maybe still 
        # return {'level': self.vol.level()}

    def PUT(self, level):
        self.vol.set_level(level)

Here the PUT parameters are pulled out of the request and passed into the method. As soon as we want JSON we have to remove the parameter, mark up the method as sending/receiving JSON and then manually pull the JSON out of the request. But what if we want XML?

What I see lacking from the framework/api is Content Negotiation. That is, if the client wants JSON, it should ask for it. And get it. If it wants XML, that should be supplied too. Hard-coding my api to JSON is fine since I am the only client, but in general resources should be separate from representations. /api/volume is the resource, JSON is the representation. Ideally, it would be best to allow multiple request and response types with the same resource implementation.

Overall I am very pleased with CherryPy and am looking forward to completing my project with it. The final step is throwing in a little web socket IO with ws4py, which I may blog about down the road.

Wednesday, April 15, 2015

Deploying dacpacs with Octopus Deploy

Introduction

Microsoft’s Data-tier Application Packages (dacpacs) provide a nice mechanism for deploying your database schema and keeping your environments in sync. SSDT provides the ability to publish SQL Server projects via Visual Studio; however, in a professional setting you will want an automated and repeatable way to deploy your dacpacs. My tool of choice is Octopus Deploy. I will show you how to integrate the two.

The SSDT Project

SSDT Sql Server projects present a bit of an issue when it comes to adding functionality to them: you can’t install NuGet packages.

This affects us in two ways. The first is that in my organization, we like to package our Octopus Deploy.ps1 and configuration transforms into a NuGet so that we can easily add them to our various projects, which we can’t do with database projects. The second issue is that you can’t install the OctoPack NuGet.

OctoPack Error

Deployment Script

To work around the first issue you must manually add a Deploy.ps1 file to the root of the project.

Adding a Deployment Script

To script the deployment of a dacpac we are going to call out to SqlPackage.exe. Here is the template that I use:

$TargetConnectionString = $OctopusParameters["ConnectionString"]
$smartDefaults = $OctopusParameters["GenerateSmartDefaults"]
$dataLoss = $OctopusParameters["BlockOnPossibleDataLoss"]

if (!$smartDefaults) {
    $smartDefaults = "False"
}

if (!$dataLoss) {
    $dataLoss = "True"
}

if (!$TargetConnectionString) {
    throw "Target connection parameter missing!"    
}

Write-Host "Deploying database now..." -NoNewline

New-Alias -Name sqlpackage -Value "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"
sqlpackage /Action:Publish /SourceFile:"[[DACPAC_NAME]].dacpac" /tcs:$TargetConnectionString `
    /p:GenerateSmartDefaults=$smartDefaults /p:BlockOnPossibleDataLoss=$dataLoss

Write-Host "deployed"

To populate the script, you have to replace [[DACPAC_NAME]] with the name of your dacpac file and ensure the path to sqlpackage.exe is correct for the server you are deploying to.

The first three lines retrieve variables from Octopus. The first is the connection string. The next two are some common parameters that you might find helpful. Please refer to the SSDT documentation to read more about the BlockOnPossibleDataLoss and GenerateSmartDefaults options. If they are not specified, the script will default to safe values (the SSDT defaults).

Passing Additional Variables

Sometimes you need additional variables in your dacpac project. For example, suppose that your database scripts reference another database. If that database has a different name in various environments, you will need to vary this per install (note: I don’t recommend ever doing this, but sometimes that is just how things are). Because the database name changes, you need some way to pass it in from Octopus.

For example purposes let’s assume that our database is called OtherDatabase_dev in our development environment and OtherDatabase_prod in our production environment. Our main database contains a script that needs to reference that database, for example:

CREATE PROCEDURE [dbo].[Procedure1]
AS
    SELECT * FROM [OtherDatabase_dev].[dbo].[Customer]
RETURN 0

The first thing that we need to do is change the database name to a variable. SQLCMD variables use the syntax $(Name).

CREATE PROCEDURE [dbo].[Procedure1]
AS
    SELECT * FROM [$(OtherDatabase)].[dbo].[Customer]
RETURN 0

Now if you compile your dacpac, you will get a warning: Warning 1 SQL71562: Procedure: [dbo].[Procedure1] has an unresolved reference to object [$(OtherDatabase)].[dbo].[Customer]. You can either ignore this warning or add a reference to the database.

Add a Reference

There are two ways to provide a reference to your “OtherDatabase”. If it is also part, or can be made part, of your solution, you can add a reference to the other project in the solution:

Add Database Reference

Once you click, the next dialog will show up:

Adding Database Reference

Here you want to select Database projects in the current solution and point it to OtherDatabase. You also want to specify where the database will be located in your environment. Your options are:

  1. Same Database
  2. Different Database, same server
  3. Different Database, different server

Here I will assume we are using the second option. To be honest, I don’t know what effect the Database name field has. It will default to the name of your project. For the Database variable field enter the variable name you wish to use.

Now if you compile the solution, the warning will disappear.

The alternate way to add a database reference is to refer directly to another dacpac. I have used this when the two database projects are in different solutions. You need to do the following:

  1. Build the other database to produce a dacpac
  2. Copy the dacpac into your source tree (and add to source control).
  3. Reference the dacpac.

To reference the dacpac, follow the same steps as before except this time you want to select Data-tier Application (.dacpac) from the list of options. Browse to the dacpac file. The other options remain the same.

Dacpac Referece

Again, compiling the solution will eliminate the warning.

Adding the Variable

Now we can actually add the variable that will get populated from Octopus. Right-click the project and select Properties. From the properties dialog, click on the SQLCMD Variables tab. In the table of variables, add your variable name:

Add Variable

You can also add a default value. I recommend putting your development value in there.

Lastly, we need to augment the deployment script to retrieve the value from Octopus and pass it along to the dacpac. At the top of the script, retrieve the value with something similar to:

$otherDatabase = $OctopusParameters["OtherDatabase.Name"]

At the end of the sqlpackage invocation pass in the variable:

/v:OtherDatabase="$otherDatabase"

The above is SQLCMD variable syntax. You can pass as many of these to SqlPackage as you like. The syntax is /v:Name=Value.
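Putting the pieces together, the relevant parts of the Deploy.ps1 template from above might end up looking like this (a sketch; [[DACPAC_NAME]] is still a placeholder and the variable name is the one from this example):

# Retrieved near the top of the script, alongside the other Octopus parameters
$otherDatabase = $OctopusParameters["OtherDatabase.Name"]

# ...

# The existing invocation, with the SQLCMD variable appended
sqlpackage /Action:Publish /SourceFile:"[[DACPAC_NAME]].dacpac" /tcs:$TargetConnectionString `
    /p:GenerateSmartDefaults=$smartDefaults /p:BlockOnPossibleDataLoss=$dataLoss `
    /v:OtherDatabase="$otherDatabase"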

Packaging for Octopus

Since we can’t add OctoPack to the project, we have to get creative here. How to implement the package is heavily influenced by how you will be automating your build. You might be using TFS, TeamCity, psake or something else. Essentially, you just need to get the dacpac file and the Deploy.ps1 file into a NuGet.

For example, if you were using psake, you might put octo.exe into your projects folder and call it directly:

.\octo.exe pack --id=Database --basePath=bin\Debug --version=1.0.0.0
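Wrapped in a psake task, that call might look something like the following sketch (the task name, dependency and version variable are all illustrative, not from an actual build):

Task Package -Depends Build {
    # psake's exec helper throws if octo.exe returns a non-zero exit code
    exec { .\octo.exe pack --id=Database --basePath=bin\Debug --version=$script:Version }
}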

In our environment, we use TFS Build. We run “Custom After Microsoft Common Targets” extensions as I have described previously.

With our common targets, we override the Dacpac’s version, copy the dacpac and dlls to a staging folder and then call the CreateOctoPackPackage task with our version and the location of the staging folder.

If you are interested in additional details of how we implement the custom build, please leave a comment.

Deploying with Octopus

Finally, we are ready to configure Octopus Deploy to deploy the database. If you don’t have a Project in Octopus, create one now.

Open your project and click on the Process tab:

Add step

Click Add step and select Deploy a NuGet package from the list.

In the Step details add the Step name, Machine roles and NuGet package ID. In my case, the NuGet package ID is always the .NET project name and I label the step with the same value. The machine role is also typically DatabaseServer but will, of course, be specific to your environment.

Step Details

Now click on the variables tab. There are two variables you must enter:

  1. ConnectionString
  2. OtherDatabase.Name

You can also optionally add BlockOnPossibleDataLoss and GenerateSmartDefaults.

Adding Variables

Conclusion

Dacpacs provide a pretty slick way to deploy a database schema and keep your environments in sync (although they are not without their issues, which I will write about at a later date). Deploying them with Octopus Deploy is straightforward and only requires a basic PowerShell script. Many advanced features are available via SQLCMD variables and SqlPackage parameters, all of which can be controlled from the Octopus interface.

Wednesday, April 8, 2015

PowerShell: Needs a Little Help from its Friends

Poor little PowerShell… It just doesn’t get any love.

It has been around for a decade now and I still find that it has not penetrated the consciousness of .NET developers. It’s sad. There’s this huge cohort of devs that don’t know anything outside of Windows and have this very powerful tool at their disposal, but tend to be oblivious to it.

There are a multitude of crazy one-offs that you can do to improve your efficiency on Windows. But the real Power comes from its two-way integration with .NET. By two-way, I mean that you can both load .NET objects in your scripts and write script components in .NET (binary cmdlets).

The real Power comes from its two-way integration with .NET.
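For example (my own illustration, not from any particular project), calling into the framework directly from a script:

# Use a static .NET method from script
Add-Type -AssemblyName System.Web
[System.Web.HttpUtility]::UrlEncode("hello world")

# Or instantiate any .NET type and work with it directly
$sb = New-Object System.Text.StringBuilder
[void]$sb.Append("built in .NET, ")
[void]$sb.Append("driven from PowerShell")
$sb.ToString()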

I know some people are not terribly happy with the programming language, but I don’t have any problems with it. Yeah it isn’t the most beautiful, but after working with it for a while, the syntax melts away and you won’t find it any better or worse than any other programming language. Well… except Perl and Python. Nothing’s better than Python and nothing’s worse than Perl ;-)

On the other hand, the pipeline of objects vs. text is a crazy awesome advancement from the existing paradigm. The drawback is that it increases the learning curve. I know when I first started with PowerShell I found the objects frustrating when I just wanted text. It requires you to learn all these selectors and formatting tricks to get a display of what you’re looking for.
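As a quick illustration of what I mean (my example, nothing special about it), finding the five biggest processes is a matter of selecting and shaping objects rather than cutting up text:

Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, Id, @{ Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } } |
    Format-Table -AutoSize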

But, at the end of the day, any scripting language is only as good as the environment it lives in. And that, sadly, is where PowerShell falls down. In fact, it really does not have much to do with PowerShell. All scripting languages suffer on Windows. Be it Perl, Python or PowerShell, Windows just doesn’t have the tooling for these “glue languages” to glue together.

Be it Perl, Python or PowerShell, Windows just doesn’t have the tooling for these “glue languages” to glue together.

In fact, PowerShell does have the most, because it can leverage .NET, COM, WMI and all the other things Microsoft has thrown at its devs over the years. But it still lives in a land of GUIs, where everything is done with mouse clicks. Here are a few examples:

  1. Add the Log on as a service right to an account. Unix admins, read that and weep. It’s a fact! You can’t script what isn’t scriptable. If you search hard enough, you will find that, yeah, it can be done. But your eyes will have to bleed in the process.
  2. You can’t even work with NTFS permissions without installing a module. That’s right, simple basic sys admin stuff. Fundamental sysadmin stuff! Just changing the ACLs on Windows is not supported out-of-the-box. It is 2015 and Microsoft still struggles to match the basic functionality that Unix has had for 40 years (see chmod).

These examples reveal the truth from Microsoft. It’s not just the .NET devs in the wild that haven’t adopted PowerShell. Remarkably, not even the developers inside Microsoft have. As long as they are stuck in the 90s do-everything-by-the-gui mentality nothing will improve. PowerShell, as a whole, will stay inferior to much more modest shells like Bash.

Hammers never show their true value in the absence of nails.

Wednesday, April 1, 2015

SSH Passphrase Chaos

Surprise, We Moved your Cheese!

Recently I stumbled onto another Easter Egg left by the K/Ubuntu team. I was trying to connect to my webserver as usual, and instead of asking me for my SSH passphrase, a strange GUI dialog I had never seen before appeared:

Passphase GUI

I have a really long, randomly-generated passphrase that I store in KWallet. As the result of a different bug, I typically add it to the ssh-agent that runs at login by manually running a script.

The first couple of times this new dialog appeared I looked up my passphrase in KWallet, pasted it into the window, and moved on with what I was doing. Looking it up was a pain, and running my script that adds the key to the agent made no difference as to whether the dialog appeared.

Confused, I looked in my KWallet. The passphrase was still there. As I often do, when in doubt, I deleted it. I tried logging into my server again. When this strange app asked, I put in my passphrase and clicked the “Remember when I login” option. I was hoping it would add my passphrase back to KWallet and things would be good again.

Things Get Really Weird

Typically, when you add your passphrase to ksshaskpass and tell it to remember it, you are asked to unlock your KWallet the first time you use your key. But now I was not getting asked for my KWallet password or my key’s passphrase. I have no idea where my passphrase was stored! I revoked ksshaskpass’s permission to KWallet, so I definitely know it is not there, or at least that ksshaskpass is not the program with the dialog (it’s been years since I last saw ksshaskpass’s dialog, so I do not exactly remember what it looked like – similar though).

A few days later, I noticed the same thing happened on my laptop. Since I had added my passphrase to the mystery application on my workstation, it had become a lot harder to figure out what was going on. Now the dialog was not appearing and I had limited options to figure out the issue.

With the same issue occurring on my laptop, I was now granted the opportunity to get to the bottom of things. This time I ran the system monitor to determine what app was running. I closed the dialog and tried logging into my server while watching the system monitor. This time, when the dialog appeared, I saw an application called gcr-prompter appear.

Something with prompt in the name told me I was likely on the right track. On the command line, I ran

$ locate gcr-prompter

I found that it is at /usr/lib/gcr/gcr-prompter. Next I ran

$ dpkg -S /usr/lib/gcr/gcr-prompter

I found out that it comes from the package gcr. dpkg -S is one of my favourite features of Debian/apt!

After searching for gcr-prompter, I found out that “gcr” is the “GNOME crypto services”. I do not know where this package came from or why it was on my machine. Both of my machines have now been messed up by it, but I do not run GNOME on either of them.

When I searched for which packages depend on it, it was clear that it is in the task kubuntu-desktop. So, it appears this is standard in Kubuntu. I wish there was some communication on these things. It would be nice if, when you upgraded, they would warn you that behaviours or packages xyz have changed. It is so frustrating to have such random things sprung on you with each upgrade.

I searched online and nothing reveals any sort of policy change. Even when seeking out this information, you cannot find it. It is very aggravating and I am getting toward my wit’s end with these breakages. I am tired of things randomly changing and having to spend hours fixing things when I am trying to get work done.

At this point I decided to just uninstall the gcr application:

$ sudo aptitude purge gcr
$ sudo apt-get autoremove

Based on this Q&A I found out where it hides its data.

As a result, I ran:

$ rm -rf .local/share/keyrings

Then I was able to run my script ~/bin/askpass.sh, re-add my passphrase to ksshaskpass and select the option to store it in KWallet.

Correct SSH Dialog

Everything back to normal… for now.

Thursday, March 26, 2015

Deploying DNN with Octopus Deploy

Disclaimer

The process that follows is what I used to successfully deploy a DNN site with Octopus Deploy. I am not entirely convinced that it is the best way to accomplish the goal, but it does work. At the end I will lay out a potential alternative method. However, it is not something I have implemented.

Introduction

The first step in the process was to convert the DNN website to a .NET web application. This is obviously the most controversial point in the overall methodology. Converting is both non-trivial and introduces issues when upgrading to newer versions of DNN. However, we were faced with a dilemma: work DNN into our existing build and deployment process, or create a new process just for DNN. We chose the former.

Our main concern was generating the NuGet package for Octopus in an automated fashion. Since we already had tooling to do this with TFS Build, it made sense to at least try to fit DNN into that tooling.

Converting DNN to a Web Application

Conversion

The general process of converting a website to a web application is fairly straightforward and documented at MSDN: Converting a Web Site Project to a Web Application Project in Visual Studio.

However, DNN is a very large project and after the conversion was complete there were still several build errors related to name conflicts. Essentially, DNN has multiple pages, with the same name, scattered throughout the folder hierarchy. I resolved these by changing the code-behind classes’ namespaces.

For example, there are several Login.ascx controls:

  • DesktopModules\AuthenticationServices\DNN\Login.ascx
  • DesktopModules\Admin\Authentication\Login.ascx

They are both namespaced to DotNetNuke.Modules.Admin.Authentication. I simply changed the one in AuthenticationServices to DotNetNuke.Modules.AuthenticationServices. I then changed its Login.ascx to Inherits="DotNetNuke.Modules.AuthenticationServices.Login" from Inherits="DotNetNuke.Modules.Admin.Authentication.Login". I also had to add or change using statements throughout the application to import the new namespaces. Overall there were not an undue number of these changes and it took me around an hour or two to get the entire website compiling.

Libraries

Next I took all the DLLs that the project directly referenced out of the bin folder of the website and added them to source control in a Lib folder. This folder was adjacent to my Src folder in TFS. These files will automatically get copied to the bin folder when the application is built by the “copy local” mechanics. However, there were several binary files that are required by the DNN site that it does not reference directly. What I mean is that the project will build without the reference but it will not run correctly if they are not found in the bin folder. I am not familiar with DNN, so I simply assume they are plugins of some kind.

For these I created a Binaries folder within the Src folder. So I ended up with something like

Src/
    Binaries/
        Providers/
            DotNetNuke.ASP2MenuNavigationProvider.dll
            DotNetNuke.DNNDropDownNavigationProvider.dll
            ...
        CNVelocity.dll
        DNNGo.Modules.DNNGallery.dll
        ...
Lib/
    DotNetNuke.Web.dll
    ...

In the project file I added a new target to copy the binaries to the bin folder when the project is built. I put the following code at the bottom of the csproj file:

<Target Name="CopyBinaries" AfterTargets="CopyFilesToOutputDirectory">
<ItemGroup>
  <Providers Include="$(MSBuildProjectDirectory)\Binaries\Providers\*.*" />
  <Binaries Include="$(MSBuildProjectDirectory)\Binaries\*.dll" />
</ItemGroup>
  <Copy SourceFiles="@(Binaries)" DestinationFolder="$(OutDir)" ContinueOnError="true" />
  <Copy SourceFiles="@(Providers)" DestinationFolder="$(OutDir)\Providers" ContinueOnError="true" />
</Target>

This is nice because it works from both Visual Studio and TFS Build (with its default Binaries output directory).

At this point the project can be built in both and the output is exactly what can be copied to IIS under a web application folder. The next step is getting it packaged for Octopus.

Packaging for Octopus

Packaging for Octopus is very straightforward. It really is just a nupkg with everything at the root. (I have created them by simply calling NuGet.exe on the command-line with PowerShell.)
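For example, a rough sketch of what that PowerShell call might look like (the nuspec name and folder layout here are illustrative, not from an actual project):

# Pack the published site folder using a hand-written nuspec
& .\NuGet.exe pack .\Website.nuspec -BasePath .\publish -OutputDirectory .\packages -Version 1.0.0.0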

OctoPack

The Octopus Deploy team distributes a tool, called OctoPack, for packaging your build as a nupkg. I highly recommend at least attempting to implement the build using OctoPack, before continuing down our custom route.

Extending OctoPack

As I said earlier, we have an existing process for packaging .NET projects as part of our TFS Build system. The nice thing is that it also works without TFS Build.

It boils down to hooking into a somewhat undocumented way to extend MSBuild. Essentially you call MSBuild and pass it a property pointing to another MSBuild file that you extend the build with:

msbuild.exe solution.sln /property:CustomAfterMicrosoftCommonTargets=custom.proj

This is a really elegant way to extend multiple projects or solutions without having to modify them individually.

In this case, the custom.proj file contains the code necessary to build the Octopus Deploy package. (I am going to gloss over some of the details as I did not author this part of the process.)

First you need to reference the OctoPack MSBuild tasks:

<UsingTask TaskName="OctoPack.Tasks.CreateOctoPackPackage" AssemblyFile="OctoPack\targets\OctoPack.Tasks.dll" />

Then in a custom target, call CreateOctoPackPackage:

<CreateOctoPackPackage
      NuSpecFileName="$(OctoPackNuSpecFileName)"
      ContentFiles="@(Content)"
      OutDir="$(OctoPackBinFiles)" 
      ProjectDirectory="$(MSBuildProjectDirectory)" 
      ProjectName="$(MSBuildProjectName)"
      PackageVersion="$(OctoPackPackageVersion)"
      PrimaryOutputAssembly="$(TargetPath)"
      ReleaseNotesFile="$(OctoPackReleaseNotesFile)"
      NuGetExePath="$(OctoPackNuGetExePath)"
      >
      <Output TaskParameter="Packages" ItemName="OctoPackBuiltPackages" />
      <Output TaskParameter="NuGetExePath" PropertyName="OctoPackNuGetExePath" />
</CreateOctoPackPackage>

And copy the output somewhere:

<Copy SourceFiles="@(OctoPackBuiltPackages)" DestinationFolder="$(OctoPackPublishPackageToFileShare)" Condition="'$(OctoPackPublishPackageToFileShare)' != ''" />

The above code was pulled from the default OctoPack target and modified to fit our needs. Essentially we hacked it to use our versioning scheme and to vary the OutDir depending on the project type (SSDT Database Project, Console Application, or Web Site/Service). That variation is done elsewhere in our “custom.proj” file.

Finally, you just need to call MSBuild with additional commandline parameters:

/p:RunOctoPack=true /p:OctoPackPublishPackageToFileShare="[Octopus Feed]" /p:CustomAfterMicrosoftCommonTargets="[Custom.proj]"

You can add the above to your TFS Build Definition by entering it in the MSBuild Arguments field under Advanced:

MSBuild Arguments in TFS

Modules

Most likely you will also have custom DNN modules that need to be deployed as part of the site. Since we already converted DNN to a web application, the obvious choice for this is to package them as NuGet packages and then reference them from the DNN project.

The details of packaging a .NET project as a NuGet are outside the scope of this article and are well documented elsewhere. The key is that you want to build the modules as NuGet packages and deploy them to your internal NuGet repository.

Now there are two ways to handle the content files for modules:

  1. Include it in the DNN project
  2. Include it in the module NuGet

There are drawbacks to both approaches.

In the first scenario, you physically nest the module projects within the DesktopModules folder of the DNN project. You then include the files in the DNN project and change their Build Action property to Content. You must also change their Build Action property to None in the module project, so that they do not get packaged in the NuGet.

To summarize, in the first scenario, the module content is part of the DNN project and not the NuGet. The NuGet is used solely to add the DLL file to the DNN project.

The drawback here is that the module NuGets are not truly self-contained, and it is a complicated process to get right, especially if you have many developers.

Despite the downsides, we took this approach. The benefit is that developers can compile the DLLs directly into the DNN site’s bin folder and run the site locally without first packaging the NuGets.

In the second scenario, you include the module content in the NuGet package. Because of this, you have to host your module projects outside of the DNN project folder structure. If they are not stored elsewhere, the NuGet packages will be deploying the content to the exact same location where it is already found. However, if the content is in the NuGet, by default it will be packaged such that it is deployed with the same layout as the .NET projects. This means that it will not be deployed into the correct location. For example, if you have the following project layout:

Module/
    Module.proj
    View.ascx
    bin/
        Module.dll

You will get a NuGet with the layout:

lib/
    net40/
        Module.dll
content/
    View.ascx

This will deposit the View.ascx in the root of your DNN site!

To override the default behaviour, you need a custom nuspec file that tells the NuGet where the files should go. For example:

<files>
    <file src="View.ascx" target="DesktopModules\Custom\Module\View.ascx" />
</files>

And herein lies the major drawback of this approach: you have to maintain custom nuspec files. Once you add the <files> section, you have to specify all the files you need to include. This becomes a pain to maintain. Furthermore, you have to build and update your modules to see changes, even on your development machine.

DNN Database

Schema

Clearly DNN also has a database to deploy. I took the extra step to include this in our deployment automation. I would say that this step is optional in the grand scheme of things, but I will describe it nonetheless.

I used the excellent Sql Server Data Tools (SSDT) to reverse engineer the database schema into a .NET project. I included that project in the DNN website solution. Technet has a nice step-by-step tutorial for reverse engineering a database with SSDT.

Deploying a dacpac is a topic on its own. However, in summary, you package the nupkg with the dacpac and a Deploy.ps1 that calls SqlPackage.exe. It is not particularly complicated.

Data

Once you have the schema deploying, the immediate next question is how to deploy the data changes. For example, installing modules, adding page sections, adding modules to pages, etc… all change the site configuration data stored in the database.

The strategy here is to have a master reference database that reflects the configuration of your production database. When a new development cycle starts, you must make a clone of this database. All configuration, throughout the cycle, is done to the clone database. When the development cycle is “complete” (are we ever really done?) and you are ready to create a release, you compare the two databases and generate a migration script. In my case, I was able to use the SSDT Data Compare tool to successfully generate these scripts.

Once you’ve generated the script, you can either deploy it manually as part of the overall deployment process, or you can add it as a Post-Deployment script in your SSDT database project. Be aware that doing so can be problematic. By default dacpacs will take any version of your database to the new version. These data scripts will only operate between two specific versions.

The final step in the cycle would be to apply the script to your reference database and re-clone it, starting the cycle again.

Would I do it Again?

So, given my disclaimer and that crazy sequence of steps, you are probably wondering if there is an easier way. I think that there may be. At this point, all I know is that the process I laid out here does work. We have deployed to production multiple times now.

However, I do suspect that leaving out the conversion to a web application would have saved a lot of time and headache. I honestly do not see why just bundling the entire site in a NuGet package would not work. Under that scenario, I would leave the modules under the DNN site’s tree structure and just zip the entire thing up in one package. As laid out by Paul, one could simply publish the site and call NuGet.exe on it. You would have to create a nuspec file. It remains to be seen how onerous that would be (there are times where you can get away with just the metadata section).

Hooking this process into the build would be a bit more complicated. First you would have to build the modules into the site’s bin directory. Then you would have to publish the website to a local folder. Finally, you would have to package the results with NuGet.exe. Since we were using TFS, and TFS’s Build Process Templates are, quite possibly, the worst build system ever devised, we chose the webapp conversion route. Looking back I am still undecided as to which would take more time, the conversion or the custom build process template.

Thursday, March 12, 2015

Deploying Lookup Data in Dacpacs

Introduction

Deploying lookup data with a dacpac (SSDT sqlproj) is a relatively straightforward process. SqlServer dacpacs have facilities for running scripts before and after the main deployment executes. Deploying lookup/seed data with a dacpac simply consists of adding the appropriate statements to your Post-Deployment script.

It can be accomplished by following these steps:
1. Add a Post-Deployment script to your project
2. Create a script for each table to be populated
3. Reference the script from the post-deployment script
4. Populate the script with merge statements containing your data

Adding a Post-Deployment Script to Your Project

I like to keep my pre- and post-deployment scripts in their own folder. To do so, simply right-click the root of the project and select Add->New Folder. Name the new folder “Scripts”.

Next, right-click the Scripts folder and select Add, then way at the bottom Script...

Adding Script

From the list of script types, pick Post-Deployment Script. I usually just call it Script.PostDeployment.sql since you can only have one Pre or Post Deployment script per project.

Script Types

Once you have added the script, look at the properties. You will notice that its Build Action is set to PostDeploy. Again, you can only have one file with this build action. However, as I will show you, you can use separate files to organize your post deployment.

Create a Script for Each Table

Now create another folder under Scripts called “LookupData”. Finally, right-click the LookupData folder and once again, add a script to your project. This time select the Script (Not in build) type and name it the same as your table. For example, “CategoryTypes.sql”. It is important that these scripts are not included in the build as they are data-related and not schema-related. In most cases they will cause the build to fail. If you add existing scripts from elsewhere, be sure to manually set the Build Action property to None.

Reference Your Table Scripts

Open your post deployment script in the editor. At the top, if you read the comments, you will notice that you can use SQLCMD syntax to include additional files in the post-deployment script (for example, :r .\LookupData\CategoryTypes.sql).

Editing

As you will notice, there are red squiggle marks under the : and . characters in the script. This is because Visual Studio does not know this script is using SQLCMD syntax. To let it know and eliminate the marks, click the SQLCMD mode button:

SqlCmd

Continue adding table scripts and referencing them from your Post-Deployment script.

Populate the Script with Merge Statements

The last step, adding Sql Merge statements to the scripts, is the most important.

SQL MERGE statements allow the scripts to be run repeatedly without changing the state of the database (they are idempotent). Merge statements basically synchronize a source and target, adding missing rows, updating existing ones (if needed) and deleting ones that do not exist in the source.

The general syntax is described at MSDN.

An example:

MERGE INTO [CategoryType] AS Target
USING (VALUES
    (1,'Turkey')
    ,(2,'Green')
    ,(3,'Melamine')
    ,(4,'Convertible')
) AS Source ([CategoryTypeID],[Name])
ON (Target.[CategoryTypeID] = Source.[CategoryTypeID])
WHEN MATCHED AND (
NULLIF(Source.[Name], Target.[Name]) IS NOT NULL OR NULLIF(Target.[Name], Source.[Name]) IS NOT NULL) THEN
 UPDATE SET
 [Name] = Source.[Name]
WHEN NOT MATCHED BY TARGET THEN
 INSERT([CategoryTypeID],[Name])
 VALUES(Source.[CategoryTypeID],Source.[Name])
WHEN NOT MATCHED BY SOURCE THEN 
 DELETE;

Of course, you can write these statements from scratch but if you are lazy like me, you will want to generate them. Most likely you have the data in an existing development or production database. There are many tools that can take such data and generate your merge statements. One such example can be found here: https://github.com/readyroll/generate-sql-merge

Keep in mind that in practice you will not likely be able to use the delete clause at will:

WHEN NOT MATCHED BY SOURCE THEN 
 DELETE;

There will likely be historical records that will reference these lookup values. You must either remove the clause or first purge all the data referencing the removed records.

Conclusion

Following these steps you will have all of your lookup and seed data stored in source control, along with your schema. You will get all the benefits of being able to track the history and changes over time and be able to automatically sync your data when deploying your dacpacs.