Wednesday, June 3, 2015

Changing the Date on a Series of Git Commits

Fixing Your Mistakes

Sometimes you make a mistake when committing to Git. You’ve either committed the changes with the wrong author tag or at the wrong time. For example, you do a bunch of work and then forget to commit before going on holiday. If you commit when you get back, and if you’re a pedant, you might want to change the dates to when you actually did the work.

I looked into doing this, and none of the pieces of information I found around the web worked on their own. I had to cobble together several sources before I got it working. Hopefully this will save someone the trouble in the future.

You can access the Git command line from Visual Studio by selecting Changes in the Team Explorer window, then selecting Open Command Prompt from the Actions menu.

The Setup

You can start by reviewing the previous commits. For example

 git log -n 4

commit 9d3f11eeaec8751e386bfcb1dbd4fa9267603afd
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 17:26:53 2015 -0400

    Fixed a bug...

commit 65ad3df245852a5f8d14a314ee63194687eeb25b
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 17:26:21 2015 -0400

    This will change the world!!!1!!

commit 0eb45721f4d0bd1a630bfac8df73ff12c53efb9d
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 10:54:07 2015 -0400

    I impress even myself!

commit f51ced7692f8509b8b5eb2c47c65cb64aec9f415
Author: Colin Svingen <something@somewhere.com>
Date:   Mon Jun 1 10:49:30 2015 -0400

    Wow, what a crazy feature!

The important pieces of information here are the commit hashes. To get just the hashes for the last four commits, run:

git log -n 4 --pretty=format:"%H"

9d3f11eeaec8751e386bfcb1dbd4fa9267603afd
65ad3df245852a5f8d14a314ee63194687eeb25b
0eb45721f4d0bd1a630bfac8df73ff12c53efb9d
f51ced7692f8509b8b5eb2c47c65cb64aec9f415

Changing the Dates

Now what you need to do is rewrite the history of your branch.

Note: Do not rewrite the history of your repository if you have already shared it. Only do this on a local copy that you have never pushed.

At this point you should make a copy of your local repository as a backup. This could all end in disaster.

To rewrite the history we will use the filter-branch command in Git.

The general form for filter-branch is:

git filter-branch --env-filter "<shell code>"

The shell code part is what makes this difficult on Windows. If you have Cygwin, you are probably better off using it, as I believe it would be more straightforward. The problem with the Windows command prompt is that the shell code is Bash code, and the escaping and quoting between Windows and Bash gets a little dicey.

The following (don’t run this) would change the date on all the commits:

git filter-branch --env-filter "GIT_AUTHOR_DATE='Mon May 28 10:49:30 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:49:30 2015 -0400';"

If you ignored my warning and ran it anyway, or made another mistake, you can restore the backup:

git reset --hard refs/original/refs/heads/master

However, what is needed is a way to change specific commits. To do that, you need to wrap the --env-filter in some logic, specifically an if statement:

git filter-branch --env-filter "if test $GIT_COMMIT = 'f51ced7692f8509b8b5eb2c47c65cb64aec9f415'; then GIT_AUTHOR_DATE='Mon May 28 10:49:30 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:49:30 2015 -0400'; fi"

Note: On Windows this has to be all on one line. If you break it up, it does not work (unlike on Linux).

If you have to run more than one filter-branch, you will need to add the -f switch to force overwriting of the local backup.

When you run this command, it rewrites the matching commit and every commit that comes after it. Those commits get new commit ids and the old ones become invalid, so if you are changing multiple commits you need to loop manually, re-checking the hashes each time (a scripted version of this loop appears at the end of the post). For example (continuing from the previous commands):

git log -n 3 --pretty=format:"%H"

32a62db0c350e8009477ceadbcf99a9b4529647f
7c97b135de3bb4c61c4c9ebea9580cc195e9dd26
3b0f476ca46d925b37990784f578bad424206e65

git filter-branch -f --env-filter "if test $GIT_COMMIT = '3b0f476ca46d925b37990784f578bad424206e65'; then GIT_AUTHOR_DATE='Mon May 28 10:54:07 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 10:54:07 2015 -0400'; fi"

git log -n 2 --pretty=format:"%H"

1ccf53c0b570c0303cfde658b33b950bb55cc665
578e6a450ff5318981367fe1f6f2390ce60ee045

git filter-branch -f --env-filter "if test $GIT_COMMIT = '578e6a450ff5318981367fe1f6f2390ce60ee045'; then GIT_AUTHOR_DATE='Mon May 28 17:26:21 2015 -0400'; GIT_COMMITTER_DATE='Mon May 28 17:26:21 2015 -0400'; fi"

And so on… When you’re done, you should be left with something like the following:

git log -n 4

commit a7f42406561f7385bd91310829bad17fdf63cc7a
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 17:26:53 2015 -0400

    Fixed a bug...

commit 8a6c4eb659476d4a562182e46c3511b80422eebe
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 17:26:21 2015 -0400

    This will change the world!!!1!!

commit 02818ff733593fbad7157db01813229e54323080
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 10:54:07 2015 -0400

    I impress even myself!

commit 3003d52854f2996a1d4272350bb59a7ca2069770
Author: Colin Svingen <something@somewhere.com>
Date:   Mon May 28 10:49:30 2015 -0400

    Wow, what a crazy feature!

And that is it. Now you can push your modified repo.
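
If you have more than a handful of commits to fix, you can script the loop rather than re-running the commands by hand. Here is a rough, untested PowerShell sketch of the idea (run from PowerShell rather than the plain command prompt); the positions and dates are illustrative, and the backtick before $GIT_COMMIT is there so PowerShell passes the variable through to the shell untouched:

# Each entry pairs a commit's distance from HEAD (0 = newest) with the
# date it should have. These values are illustrative.
$fixes = @(
    @{ Skip = 3; Date = 'Mon May 28 10:49:30 2015 -0400' },
    @{ Skip = 2; Date = 'Mon May 28 10:54:07 2015 -0400' },
    @{ Skip = 1; Date = 'Mon May 28 17:26:21 2015 -0400' }
)

foreach ($fix in $fixes) {
    # Re-read the hash on every pass, because each filter-branch run rewrites it.
    $hash = git log --skip=$($fix.Skip) -n 1 --pretty=format:"%H"
    $date = $fix.Date
    git filter-branch -f --env-filter "if test `$GIT_COMMIT = '$hash'; then GIT_AUTHOR_DATE='$date'; GIT_COMMITTER_DATE='$date'; fi"
}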

References

How to reset date/timestamps for one or more Git commits.
How can one change the timestamp of an old commit in Git.
Changing the timestamp of a previous Git commit.
Generate a Git patch for a specific commit.

Wednesday, May 27, 2015

Handling Errors from Non-cmdlets in PowerShell

Lately I have found myself implementing multiple builds with TFS that basically just call PowerShell scripts. One of the issues that I have with these scripts is getting good error reporting back to TFS.

Specifically, the issue comes down to the fact that non-cmdlet command-line tools generally return exit codes and cannot throw PowerShell errors when they fail. To get command-line tools working like cmdlets, I took a page out of James Kovacs’ book (blog?) and implemented an Exec function:

function Exec([scriptblock]$cmd) { 
    $result = & $cmd 
    if ($LastExitCode -ne 0) {
        throw $result
    } 
    else {
        foreach ($line in $result) {
            Write-Host $line
        }
    }
}

Which you can simply call with:

Exec { sometool.exe "with parameters" }    

In the case of an error, it will throw an exception. Otherwise, the output will be written to standard output.

In the context of TFS, I put together a nice sequence that will fail the build if an exception is thrown.

First, I include the Exec function. You can find it on GitHub Gist. This one is slightly modified to show the last 50 lines of output when an error occurs.
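
For reference, a rough sketch of that variation might look something like this (the Gist is the canonical version):

function Exec([scriptblock]$cmd) {
    $result = & $cmd
    if ($LastExitCode -ne 0) {
        # Throw only the tail of the output so the build log shows the
        # relevant failure without the entire transcript.
        throw ($result | Select-Object -Last 50 | Out-String)
    }
    else {
        foreach ($line in $result) {
            Write-Host $line
        }
    }
}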

Next, wrap all of your Exec calls in a try/catch like so:

try {    
    Exec { sometool.exe "with parameters" }    
} catch {
    Write-Error $_
    Exit 1
}

Now presumably, you will be calling your PowerShell script with an InvokeProcess activity from your Build Process Template. In the scope that the activity will be run, create a variable called ExitCode of type Int32:

Adding an exit code

Now set the Result field of the InvokeProcess activity to the new variable:

Setting the exit code

This will set the return code of your PowerShell build script into the variable you created.

Now add a SetBuildProperties activity after the InvokeProcess activity. In the properties of the SetBuildProperties activity, in the PropertiesToSet field check off Status. Then set the Status field to

If(ExitCode = 0, BuildStatus.Succeeded, BuildStatus.Failed)

Setting the build property

Now when your script throws an error, it will return an exit code of 1 and that will be caught by TFS which will fail the build. I acknowledge that it is ironic to convert the tool’s return code to an exception and then back to a return code, but that is what is necessary to have it flow through to TFS correctly.

Wednesday, May 20, 2015

Octopus Deploy: Protecting Yourself from Variable Replacement Errors

Almost every one of my deployments with Octopus Deploy involves a step where variables are replaced in configuration files. Frequently we encounter the case where a variable has not been properly replaced. Now I am not implying there is a bug in Octopus, but rather that mistakes happen. Regularly.

Sometimes we misspell a variable in either Octopus or the config file, or we add a variable to the config and not Octopus, etc… The deployment succeeds, but the application doesn’t work. It’s common enough that one of the first troubleshooting steps we perform is to search the config file for #{ and see if anything is found.

I got a little bit of time a few weeks ago and put together a script to help resolve this issue in the future.

Source code for Find-Unreplaced PowerShell cmdlet

The script will look in the files you specify for the pattern #{...}. You can also specify the folder to look in, whether the cmdlet should search recursively, and whether matches should be treated as errors. The output includes the full path of the file, the unreplaced variables it found and the line numbers they were found on.
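
The Gist is the real implementation; the following is only a rough approximation of the idea, built on Select-String, so you can see how little is involved:

function Find-Unreplaced {
    param(
        [string]$Path = '.',
        [string[]]$Files = '*.config',
        [string[]]$Exclude = @(),
        [switch]$Recurse,
        [switch]$TreatAsError
    )

    # Select-String reports the file, the line number and the matched text,
    # which covers everything described above.
    $hits = Get-ChildItem -Path (Join-Path $Path '*') -Include $Files -Exclude $Exclude -Recurse:$Recurse -File |
        Select-String -Pattern '#\{.+?\}'

    foreach ($hit in $hits) {
        Write-Warning ("{0}:{1} unreplaced variable(s): {2}" -f $hit.Path, $hit.LineNumber, ($hit.Matches.Value -join ', '))
    }

    if ($TreatAsError -and $hits) {
        throw "Found unreplaced Octopus variables."
    }
}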

Here are a few examples:

Find-Unreplaced -Path $OctopusOriginalPackageDirectoryPath -Exclude *.Release.config -Recurse -TreatAsError

This will look recursively through the path your package was installed to, and if any unreplaced variables are found, it will throw an error. You could put this at the end of your Deploy.ps1 and it would actually cause the deployment to fail. The -Exclude *.Release.config is there because we expect unreplaced variables in files like web.Release.config.

Find-Unreplaced -Path .\ -Recurse

In this case the script is looking in the current directory and, again, recursing. No files are excluded and it’s just using the default Files parameter of *.config. It will issue a warning if matches are found. You might put this in your PostDeploy.ps1.

Now I commonly use this script with the -TreatAsError flag set and catch this type of error immediately during the deployment.

Wednesday, May 13, 2015

KDE Multiple Monitors

Where Did my Monitor Go?

I was having an issue with blurry fonts on my workstation. To test if it was user specific, I created a new user and logged in. After determining that it wasn’t, I logged out and deleted the user. That is when things went pear-shaped. Suddenly my multiple monitors were not working correctly. Somehow xorg or KDE had swapped the two monitors around and now my left monitor acted as though it was on the right.

I started playing around with the NVidia settings and that just made things worse. Suddenly, my two desktops had different backgrounds (where they used to be the same image). The right monitor had an image I had set months and months ago. The weird thing is that although the monitor was on and I could move my mouse to it, I could not drag any windows onto it. Xorg saw the monitor, but KDE did not. Or that is as far as I figured.

When I ran xrandr, it said the second monitor was disconnected. At this point I deleted the .kde/share/config/krandrrc file, because in the past these files have interfered with my config. Of course, that did nothing because xrandr could not even see the second monitor.

I have seen this issue before and I know it is a problem with X, so I did the only thing I could think of and deleted the xorg.conf file (if you see a pattern here… yeah, I pretty much just start deleting config files at random when stuff goes haywire). I re-ran the NVidia settings and created a new config file. That seemed to fix the issue. Running xrandr showed that both monitors were connected.

A Love Letter to KScreen

Now when I booted into KDE my desktop was cloned/mirrored. Of course I do not want two views of the same thing, so I loaded the System Settings and went to Display and Monitors.

Actually, on a side note… there used to be a different config tool that came up when you selected this. When I upgraded from 12.04 to 14.04 it disappeared. What was really awesome about that was that, apparently, it was replaced with a tool called KScreen. Unfortunately, it seems as though someone forgot to tell the Kubuntu team, because the upgrade uninstalled the old tool, but did not install KScreen. That meant that after the install there was nothing in Display and Monitors but Screensavers. I really like how they litter the upgrades with these little Easter eggs, so that every once in a while, you can take a break from getting work done and solve a neat little puzzle.

Still with me? Okay, so I loaded up Display and Monitors, which is really KScreen. Aaaaaand, only one monitor is there. You gotta love it when you solve a problem and the problem is still there. So, at first only one monitor is detected and that is why KScreen shows one. Now both are detected and it is still showing only one. What do I do now?

KScreen only showing one monitor

So I searched Google again, but this time I had to ignore all the results talking about Xorg, because I had that working. After sifting through a few results, I found someone who said that if you’re in the situation I was in, it is probably because both screens ARE ON TOP OF EACH OTHER IN KSCREEN!!! That is right, this crazy program will actually dump both monitors on top of each other! I cannot imagine under what circumstances that would be helpful. Apparently that is how it represents cloned displays. I think the author(s) of this software really need to rethink the usability of this particular “feature”.

Oh there's my other monitor, duh!

Alright, I moved the screens next to each other to solve this little problem. Finally, I put my computer to sleep and went to bed myself. The next day I loaded up the computer. Since I was actually working on a Font problem… yes, maybe one day I would be able to get some real work done… I wanted to see if it was maybe my monitors that were the issue. So I brought my laptop over and connected it to one of my monitors. If anyone cares at this point, the font problem does seem related to the monitors.

So I disconnected my laptop and plugged the monitor back into my computer and guess what happened? That’s right, they are mirrored again. So I fixed them in my new favourite program. This time I logged out and back in to see if the change stuck. Nope, of course it did not. They are mirrored again. I tried changing it, logging out, back in and again they are mirrored. So, I checked for a .kde/share/config/krandrrc and sure enough there is not one.

So here is the deal:

  1. The old configuration tool, that was intuitive and worked, is gone.
  2. It has been replaced with KScreen, which is not installed by default.
  3. KScreen devs think that hiding one of your monitors is a good way to demonstrate that it is cloned.
  4. Settings applied in KScreen only apply to the current session and are not persisted.

So how do I get my settings persisted? I do not know how to author a krandrrc file from scratch. I do not know where they come from. I thought it was KScreen, but apparently not. I did more searching and found about 17 different ways to hack your system to run xrandr commands at bootup, login, etc… Nobody suggested writing a krandrrc file and letting KDE handle it.

One of the guides noted that if you want to get your settings to take and not be overridden by KScreen, you have to disable it as a service:

Shutting KScreen service off

I got thinking that when I boot up, everything seems fine until I log in. As in, xorg is configured correctly and KDM is working correctly (not mirrored); it’s only when I log in that things go off the rails. On a hunch, I decided to do nothing with xrandr and simply turn off the “KScreen 2” service. I figured that it is probably loading at login, not finding a config file and resetting to the default cloned configuration. Not loading it should keep everything configured by X. I gave it a try and rebooted. Voila, everything is working as it should.

Conclusion

I think that maybe now I know why KScreen was not added during the upgrade. It appears that KDE is completely incapable of handling multiple monitor support for you. What was wrong with the old tool? I do not know. But KScreen is useless in my case. The way it defaults to cloned and hides one of your monitors is asinine. So is the fact that it does not actually store your settings anywhere so that they persist between sessions. Finally, the fact that, because it doesn’t store your settings, it always stomps over your system settings is simply infuriating. To say that it is broken is an understatement.

Linux advocates are always talking about Linux on the desktop. Well, when KDE’s multiple monitor support, in 2014, is behind what you got out of the box with Windows XP in 2001, it’s obvious Linux is not “ready for the desktop”. I have been running Linux for 16 years and I can barely tolerate this nonsense. Put the average computer user through this and watch their heads explode.

Wednesday, April 29, 2015

Frameworks vs. Libraries

I got thinking about Rob Aston’s article Look ma, no frameworks and the visceral, zealous reactions it provoked. I suppose that the people defending frameworks have not had the same horrifying experiences with them as I have. [And I will be honest, those are really the only two non-Microsoft frameworks I have worked with. But if I had to choose between working with another “enterprise” framework and slamming nails in my ears, I would take the nails.]

Granted, not all frameworks are as enterprisey as the two piles of cow feces I got to work with, and one should not paint with a broad brush, but nonetheless I have to agree with Rob. Now maybe some of those people disagreeing have worked with really lean productive frameworks (assuming lean productive frameworks exist). On the other hand I suspect some of them are responsible for creating these monstrosities and feel the need to defend them. If you are one of those people, please leave your company name in the comments so I can make sure I never ever work there.

Things really got out of control when he went off on Angular. I still think that is one of the most hilarious pieces of tech writing I have ever read. It was like reading an Old Man Murray review. The reactions on the other hand were like reading creationist comments on an episode of SciShow.

But as I said, I got thinking about the argument. And I realized that the framework vs. library dispute is a bit of a rehash of inheritance over composition. Anyone who has used a giant framework, or pretty much any framework, will have to admit that you are deriving your application from it. In fact, frameworks often make heavy use of class-level inheritance. Take ASP.NET MVC for example, you cannot get started without first inheriting from System.Web.Mvc.Controller.

As with classes, I do not see why we would not all be in general agreement to favour composing our applications with libraries over inheriting from frameworks. All the same reasons hold true. Some of the problems with frameworks are:

  1. They are very rigid and impose a certain way of doing things. If you go off the beaten path, you will be stung. (WCF I am looking at you)
  2. They often abstract the wrong things or at the wrong times.
  3. Lock in. Oh my, the lock in. They tend to be an all-or-nothing proposition. As someone who has tried to migrate functionality out of a framework I can attest to the fact that it is nearly impossible. To the point where re-writing the application can take less time.
  4. Further to #3 they can be a maintenance headache. Often functionality is buried in them, strewn across various layers.
  5. There are times where doing things with the framework is more work than without (Hey WCF! You again.)

What it comes down to is that with composition of libraries, you are a puppeteer controlling an assortment of marionettes. You are free to move them as you please, decorate them, or cut their strings if you so choose. Conversely, frameworks are the puppeteers and you are the puppet. (I wonder if that is why the same framework advocates seem to be in love with Dependency Injection? They adore inverting control?)

Maybe I am jaded by my recent experiences but it seems to me there is a really strong tendency lately for people to reach for the most complicated solution first. The sudden rampant use of DI, repositories and service layers is burying tons of simple applications under a morass of abstraction and convolution. I am not sure what happened to KISS and YAGNI, but they seem to have died somewhere along the way.

One thing I assume is true, the people writing these monstrosities must never stick around long enough to maintain them, or I am sure they would stop. Actually, come to think of it, what these people really need to do is maintain an app built with someone else’s custom framework. Then they would see the light.

Wednesday, April 22, 2015

Yummy CherryPy

I recently resurrected an ancient project of mine called JBox. It’s an HTML/Javascript front-end for an MP3 player.

Way back in 2001 I had a headless Linux machine that I had set up as a router and firewall for my home network. Its proximity to the cable modem put it next to the TV and stereo. I soon got the idea to dump all my MP3s onto it and connect it to the stereo. To play them, I would SSH onto the machine and queue them up with mpg123. That worked great for me, but not so great for my two roommates.

Teaching them SSH proved futile. Therefore, they could not skip songs or control the music in any way. We would just leave it running all the time and turn the stereo on and off. When it was playing, you got what you got. When it died, and it did so often, there was no music until I could remote in and re-queue it.

I kicked around a couple of client-server models before settling on a web-based HTML interface. I looked at Icecast, at building my own desktop app and protocol, and at streaming the sound over the network. None of it was appealing at the time. HTML was ideal since nothing had to be installed locally.

Fast-forward to today and I had this mad idea of putting it up on Github. Sure there are better ways of doing it now, but I still think it’s neat and it proves that I can (or could) program in Python. Now I have found myself getting carried away improving it.

I sought to improve it in two major ways:

  1. Convert to HTML5 and modern Javascript
  2. Make the backend more self-contained and easier to deploy.

In practice that meant:

  1. Rewriting the front-end as a single page application in AngularJS
  2. Integrating a web server into the back-end and eliminating the need for Apache, suexec and cgi-bin.

Lately I have been working with the Go programming language and one of the things that I absolutely love is the low level and elegant way that you can write web applications with it. For example, here is a minimal server from their documentation:

package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

Registering handlers and writing code is such a breath of fresh air from building .NET apps buried in layers of abstraction. I wanted to get this same elegance in Python. After searching and researching Python’s copious amount of web frameworks, I came across the beautiful CherryPy project. The example from their homepage was all I needed to know.

import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello World!"
    index.exposed = True

cherrypy.quickstart(HelloWorld())

Because I was writing the front-end in AngularJS, all I really need from the back-end is a series of REST-based web resources. Therefore, the first thing I needed was to incorporate a static web server. Typically I would let nginx play this role and just have it reverse proxy to my application. However, in this case I wanted a truly self-contained application. Remember, this is a “desktop” application with an HTML interface, not a website.

It actually took a bit of searching to find how to do this in the simplest way possible. In fact, I ended up sending a pull request to improve the documentation, to make it easier for the next gal/guy.

    import os
    import cherrypy

    class Root(object):
        pass

    if __name__ == '__main__':
        CONF = {
                '/': {
                    'tools.staticdir.root': os.path.abspath(os.getcwd()),
                    'tools.staticdir.on': True,
                    'tools.staticdir.dir': 'site',
                    'tools.staticdir.index': 'index.html'
                      }
        }
        cherrypy.quickstart(Root(), '/', CONF)

It is basically all in the CONF. CherryPy expects an object passed to quickstart, so we just create an empty one. For the config, we simply turn staticdir on, point it to the folder to serve (in this case ‘site’) and set the index file to index.html.

To enable my REST endpoints, I had to add a few more lines. First

SETUP = {'/': {'request.dispatch': cherrypy.dispatch.MethodDispatcher()}}

The MethodDispatcher tells CherryPy to map the request type to the corresponding method. For example, the http verb GET will map to the GET(self) method. Note that the key, '/', is relative off the mount point, not absolute from the root of the server.

Then I just mount a bunch of objects into the web server hierarchy:

cherrypy.tree.mount(songs.Songs(), '/api/songs', SETUP)
cherrypy.tree.mount(volume.Volume(), '/api/volume', SETUP)
cherrypy.tree.mount(nowplaying.NowPlaying(), '/api/nowplaying', SETUP)
...

Here I am mounting an instance of the Songs class to the absolute path /api/songs off the root of the website. It is configured to use the MethodDispatcher. And so on with the Volume and NowPlaying classes.

Here is an example of one of the REST handlers:

    import cherrypy
    from jbox.core import volume

    class Volume(object):
        def __init__(self, config):
            self.exposed = True
            self.vol = volume.Volume(config)

        @cherrypy.tools.json_out()
        def GET(self):
            return {'level': self.vol.level()}

        @cherrypy.tools.json_in()
        def PUT(self):
            json = cherrypy.request.json
            self.vol.set_level(json['level'])

I just love the simplicity of it.
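
As a quick smoke test, the endpoints can be exercised from any HTTP client. Assuming the app is running on CherryPy’s default port of 8080, something like this does the trick (PowerShell shown here, curl works just as well):

# GET the current volume level; returns the JSON body, e.g. @{level=42}
Invoke-RestMethod -Uri 'http://localhost:8080/api/volume' -Method Get

# PUT a new level as JSON
Invoke-RestMethod -Uri 'http://localhost:8080/api/volume' -Method Put `
    -ContentType 'application/json' -Body (@{ level = 50 } | ConvertTo-Json)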

There is, however, one dark side here that I am not terribly happy about. It may turn out that I am missing something, but as far as their documentation is concerned, this is how you send and receive JSON data. I find the implementation a little problematic. Without JSON, the methods would look like this:

    def GET(self):
        return self.vol.level()
        # or maybe still 
        # return {'level': self.vol.level()}

    def PUT(self, level):
        self.vol.set_level(level)

Here the PUT parameters are pulled out of the request and passed into the method. As soon as we want JSON we have to remove the parameter, mark up the method as sending/receiving JSON and then manually pull the JSON out of the request. But what if we want XML?

What I see lacking from the framework/api is Content Negotiation. That is, if the client wants JSON, it should ask for it. And get it. If it wants XML, that should be supplied too. Hard-coding my api to JSON is fine since I am the only client, but in general resources should be separate from representations. /api/volume is the resource, JSON is the representation. Ideally, it would be best to allow multiple request and response types with the same resource implementation.

Overall I am very pleased with CherryPy and am looking forward to completing my project with it. The final step is throwing in a little web socket IO with ws4py, which I may blog about down the road.

Wednesday, April 15, 2015

Deploying dacpacs with Octopus Deploy

Introduction

Microsoft’s Data-tier Application Packages (dacpacs) provide a nice mechanism for deploying your database schema and keeping your environments in sync. SSDT provides the ability to publish SQL Server projects via Visual Studio; however, in a professional setting you will want an automated and repeatable way to deploy your dacpacs. My tool of choice is Octopus Deploy. I will show you how to integrate the two.

The SSDT Project

SSDT SQL Server projects present a bit of an issue when it comes to adding functionality to them: you can’t install NuGet packages.

This affects us in two ways. The first is that in my organization, we like to package our Octopus Deploy.ps1 and configuration transforms into a NuGet package so that we can easily add them to our various projects, which we can’t do with database projects. The second is that you can’t install the OctoPack NuGet package.

OctoPack Error

Deployment Script

To work around the first issue you must manually add a Deploy.ps1 file to the root of the project.

Adding a Deployment Script

To script the deployment of a dacpac we are going to call out to SqlPackage.exe. Here is the template that I use:

$TargetConnectionString = $OctopusParameters["ConnectionString"]
$smartDefaults = $OctopusParameters["GenerateSmartDefaults"]
$dataLoss = $OctopusParameters["BlockOnPossibleDataLoss"]

if (!$smartDefaults) {
    $smartDefaults = "False"
}

if (!$dataLoss) {
    $dataLoss = "True"
}

if (!$TargetConnectionString) {
    throw "Target connection parameter missing!"    
}

Write-Host "Deploying database now..." -NoNewline

New-Alias -Name sqlpackage -Value "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"
sqlpackage /Action:Publish /SourceFile:"[[DACPAC_NAME]].dacpac" /tcs:$TargetConnectionString `
    /p:GenerateSmartDefaults=$smartDefaults /p:BlockOnPossibleDataLoss=$dataLoss

Write-Host "deployed"

To populate the script, you have to replace [[DACPAC_NAME]] with the name of your dacpac file and ensure the path to sqlpackage.exe is correct for the server you are deploying to.

The first three lines retrieve variables from Octopus. The first is the connection string. The next two are some common parameters that you might find helpful. Please refer to the SSDT documentation to read more about the BlockOnPossibleDataLoss and GenerateSmartDefaults options. If they are not specified, the script will default to safe values (the SSDT defaults).

Passing Additional Variables

Sometimes you need additional variables in your dacpac project. For example, suppose that your database scripts reference another database. If that database has a different name in various environments, you will need to vary this per install (note: I don’t recommend ever doing this, but sometimes that is just how things are). Because the database name changes, you need some way to pass it in from Octopus.

For example purposes let’s assume that our database is called OtherDatabase_dev in our development environment and OtherDatabase_prod in our production environment. Our main database contains a script that needs to reference that database, for example:

CREATE PROCEDURE [dbo].[Procedure1]
AS
    SELECT * FROM [OtherDatabase_dev].[dbo].[Customers]
RETURN 0

The first thing we need to do is change the database name to a variable. SQLCMD variables use the syntax $(Name).

CREATE PROCEDURE [dbo].[Procedure1]
AS
    SELECT * FROM [$(OtherDatabase)].[dbo].[Customers]
RETURN 0

Now if you compile your dacpac, you will get a warning: Warning 1 SQL71562: Procedure: [dbo].[Procedure1] has an unresolved reference to object [$(OtherDatabase)].[dbo].[Customers]. You can either ignore this warning, or add a reference to the database.

Add a Reference

There are two ways to provide a reference to your “OtherDatabase”. If it is also part, or can be part, of your solution, you can add a reference to the other project in the solution:

Add Database Reference

Once you click, the next dialog will show up:

Adding Database Reference

Here you want to select Database projects in the current solution and point it to OtherDatabase. You also want to specify where the database will be located in your environment. Your options are:

  1. Same Database
  2. Different Database, same server
  3. Different Database, different server

Here I will assume we are using the second option. To be honest, I don’t know what effect the Database name field has. It will default to the name of your project. For the Database variable field, enter the variable name you wish to use.

Now if you compile the solution, the warning will disappear.

The alternate way to add a database reference is to refer directly to another dacpac. I have used this when the two database projects are in different solutions. You need to do the following:

  1. Build the other database to produce a dacpac
  2. Copy the dacpac into your source tree (and add to source control).
  3. Reference the dacpac.

To reference the dacpac, follow the same steps as before except this time you want to select Data-tier Application (.dacpac) from the list of options. Browse to the dacpac file. The other options remain the same.

Dacpac Reference

Again, compiling the solution will eliminate the warning.

Adding the Variable

Now we can actually add the variable that will get populated from Octopus. Right-click the project and select Properties. From the properties dialog, click on the SQLCMD Variables tab. In the table of variables, add your variable name:

Add Variable

You can also add a default value. I recommend putting your development value in there.

Lastly, we need to augment the deployment script to retrieve the value from Octopus and pass it along to the dacpac. At the top of the script, retrieve the value with something similar to:

$otherDatabase = $OctopusParameters["OtherDatabase.Name"]

At the end of the sqlpackage invocation pass in the variable:

/v:OtherDatabase="$otherDatabase"

The above is SQLCMD variable syntax. You can pass as many of these to SqlPackage as you like; the general form is /v:Name=Value.
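
Putting it together, the sqlpackage invocation from the deployment script ends up looking something like this:

sqlpackage /Action:Publish /SourceFile:"[[DACPAC_NAME]].dacpac" /tcs:$TargetConnectionString `
    /p:GenerateSmartDefaults=$smartDefaults /p:BlockOnPossibleDataLoss=$dataLoss `
    /v:OtherDatabase="$otherDatabase"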

Packaging for Octopus

Since we can’t add OctoPack to the project, we have to get creative here. How to implement the package is heavily influenced by how you will be automating your build. You might be using TFS, TeamCity, psake or something else. Essentially, you just need to get the dacpac file and the Deploy.ps1 file into a NuGet.

For example, if you were using psake, you might put octo.exe into your projects folder and call it directly:

.\octo.exe pack --id=Database --basePath=bin\Debug --version=1.0.0.0

In our environment, we use TFS Build. We run “Custom After Microsoft Common Targets” extensions as I have described previously.

With our common targets, we override the Dacpac’s version, copy the dacpac and dlls to a staging folder and then call the CreateOctoPackPackage task with our version and the location of the staging folder.

If you are interested in additional details of how we implement the custom build, please leave a comment.

Deploying with Octopus

Finally, we are ready to configure Octopus Deploy to deploy the database. If you don’t have a Project in Octopus, create one now.

Open your project and click on the Process tab:

Add step

Click Add step and select Deploy a NuGet package from the list.

In the Step details add the Step name, Machine roles and NuGet package ID. In my case, the NuGet package ID is always the .NET project name and I label the step with the same value. The machine role is also typically DatabaseServer but will, of course, be specific to your environment.

Step Details

Now click on the Variables tab. There are two variables you must enter:

  1. ConnectionString
  2. OtherDatabase.Name

You can also optionally add BlockOnPossibleDataLoss and GenerateSmartDefaults.

Adding Variables

Conclusion

Dacpacs provide a pretty slick way to deploy a database schema and keep your environments in sync (although they are not without their issues, which I will write about at a later date). Deploying them with Octopus Deploy is straightforward and only requires a basic PowerShell script. Many advanced features are available via SQLCMD variables and SqlPackage parameters, all of which can be controlled from the Octopus interface.