Automated Deployments using Visual Studio


In my earlier post "Automated Testing for the Data Warehouse", I sketched the outlines of what is needed to achieve automated testing for your Data Warehouse solutions. Today, I want to look at the first step: build & deploy. Between the previous post and the current one, some useful content about this has already been written by Jens Vestergaard - he even uses VSTS to do his builds, something I still have to look into. Meanwhile, here's my method of acquiring the latest sources & building them using Visual Studio.

Getting the sources

In order to check out sources from TFS, you'll need the TF tools. There are generally three ways to get them:

  1. Copy tf.exe (and other required files) manually to your build server
  2. Use Team Explorer Everywhere (requires Java)
  3. Install Visual Studio on your build server

The first option doesn't seem well supported to me (but might be okay for a PoC). The second requires installing Java on your server, so I chose the third option and installed Visual Studio on my build server. This not only gave me tf.exe, but also the possibility to build all project types I want, regardless of whether it's SSIS, SSRS, SSAS or SQL Server.

After installing Visual Studio, tf.exe can be used like this:

"%programfiles(x86)%\Microsoft Visual Studio 14.0\Common7\IDE\TF.exe" get

For a full list of supported TF commands, see MSDN and the Visual Studio docs.

Building and deploying

The default go-to for automated builds is msbuild. Although that's a great tool, it can be a bit of a pain to set up for all BI project types. It can be done, but with VS already installed I found it way easier to just use devenv.com[ref]Mind you, you shouldn't use devenv.exe here - that opens the GUI. devenv.com is the CLI for operating Visual Studio, which is what we want for building the solution.[/ref]. In order to rebuild the entire solution, you need the following command:

"%programfiles(x86)%\Microsoft Visual Studio 14.0\Common7\IDE\devenv.com" "D:\Path\To\Your\Solution.sln" /rebuild BuildConfigurationName /out LogFileName

D:\Path\To\Your\Solution.sln is the path to your solution, BuildConfigurationName is the name of the build configuration you want to use, and LogFileName is the path to the logfile in which to store the results. In my setup, I embedded this command in a .cmd file and replaced LogFileName with %1. This enabled me to call the file from within a workflow engine (like SSIS) and write the logfile dynamically in a convenient way.
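To illustrate, the same command line can also be assembled programmatically, which makes the log-file parameter (the %1 trick) explicit. This is just a sketch; the helper name and example paths are made up, and the Visual Studio 14.0 path matches the install used above:

```python
import os

def devenv_rebuild_command(solution, build_config, log_file):
    """Assemble the devenv.com rebuild command line (hypothetical helper)."""
    devenv = os.path.join(
        os.environ.get("ProgramFiles(x86)", r"C:\Program Files (x86)"),
        r"Microsoft Visual Studio 14.0\Common7\IDE\devenv.com")
    # /rebuild = clean + build the given configuration; /out = write a logfile
    return [devenv, solution, "/rebuild", build_config, "/out", log_file]

# the log file stays a parameter, mirroring the %1 in the .cmd file
cmd = devenv_rebuild_command(r"D:\Path\To\Your\Solution.sln",
                             "BuildConfigurationName", r"D:\Logs\build.log")
```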

After the build succeeds, the primary concern is how to install the projects on a server automatically. Because of dependencies, we'll need to record somewhere the order in which the solution can be installed. This is a good thing: recording the installation order lets you accelerate your deployment process even further, because you can automate it (heck, that's what we're doing here in the first place :-)). The main problem is installing the databases in the right order - apart from the databases, the install order is pretty easy:

  1. Set up databases
  2. Install cubes / tabular models (always depend on databases)
  3. Install ETL[ref]ETL projects can depend on each other as well, but those references cannot be hard-wired, so you won't run into problems when not adhering to a dependency graph here[/ref]
  4. Install reports

Steps 3 and 4 have no mutual dependency, so they could just as well be executed in parallel.
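The install order above can be captured as data, which makes the "steps 3 and 4 in parallel" observation explicit. A minimal sketch with made-up phase names:

```python
# Hypothetical encoding of the install order above: each inner list is a
# phase whose members have no mutual dependencies and may run in parallel.
DEPLOY_PHASES = [
    ["databases"],        # 1. databases first
    ["cubes"],            # 2. cubes / tabular models depend on the databases
    ["etl", "reports"],   # 3+4. independent of each other: parallel candidates
]

def linearize(phases):
    """Yield the steps in one valid sequential order."""
    for phase in phases:
        yield from phase

order = list(linearize(DEPLOY_PHASES))
```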

I prefer to model the dependencies inside Visual Studio - in that way the developers adding new projects are responsible for updating the dependencies. The build order is included in the solution file, as well as the paths to all projects:

Project("{159641D6-6404-4A2A-AE62-294DE0FE8301}") = "MyETLProject", "MyPath\MyProjFile.dtproj", "{7166DE16-193E-4803-BAA3-DBF20145549E}"
	ProjectSection(ProjectDependencies) = postProject
		{99A33B2E-0BA9-44B5-967F-D853B10A08C8} = {99A33B2E-0BA9-44B5-967F-D853B10A08C8}
		{99138638-9BE7-4213-B653-CAB500B52446} = {99138638-9BE7-4213-B653-CAB500B52446}
		{0A3F727C-C751-4987-B60A-D39BF23725EB} = {0A3F727C-C751-4987-B60A-D39BF23725EB}
		{2D8BF485-342A-4A92-B695-309B2F394361} = {2D8BF485-342A-4A92-B695-309B2F394361}
	EndProjectSection
EndProject

When you access the project files in turn, you can easily find the location of the build results.


As you can see, up to this point you don't need any external metadata in order to do automated builds.
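As a sketch of what a deployment script could do with that metadata, here's a hypothetical Python helper (the function names are mine) that extracts the project paths and ProjectDependencies sections shown above from a .sln file and derives a valid deployment order via a topological sort (Kahn's algorithm). Real solution files contain many more sections, which are simply ignored here:

```python
import re
from collections import defaultdict, deque

def parse_sln(sln_text):
    """Extract project GUIDs, paths and ProjectDependencies from a .sln."""
    proj_re = re.compile(
        r'Project\("\{[0-9A-Fa-f-]+\}"\)\s*=\s*'
        r'"([^"]+)",\s*"([^"]+)",\s*"(\{[0-9A-Fa-f-]+\})"')
    dep_re = re.compile(r'(\{[0-9A-Fa-f-]+\})\s*=\s*\{[0-9A-Fa-f-]+\}')
    projects = {}            # guid -> (name, path to project file)
    deps = defaultdict(set)  # guid -> guids it depends on
    current = None
    for raw in sln_text.splitlines():
        line = raw.strip()
        if m := proj_re.match(line):
            name, path, guid = m.groups()
            projects[guid] = (name, path)
            current = guid
        elif line == "EndProject":
            current = None
        elif current and (m := dep_re.match(line)):
            deps[current].add(m.group(1))
    return projects, deps

def deploy_order(projects, deps):
    """Kahn's algorithm: every project comes after its dependencies."""
    remaining = {g: set(deps[g]) & set(projects) for g in projects}
    order = []
    queue = deque(g for g, d in remaining.items() if not d)
    while queue:
        guid = queue.popleft()
        order.append(guid)
        for other, d in remaining.items():
            if guid in d:
                d.remove(guid)
                if not d:
                    queue.append(other)
    return order
```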

Now that we know the deployment order, we can execute the actual deployment. Split out per project type, here's the way to do that from the command line:

Databases (SQL Server)

In order to deploy databases, use sqlpackage.exe:

"%programfiles(x86)%\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" /Action:Publish  /p:ScriptDatabaseOptions=False /p:BlockOnPossibleDataLoss=False /SourceFile:".\DacPacSource.dacpac" /TargetServerName:"ServerName" /TargetDatabaseName:"TargetDatabaseToDeploy" /Variables:DatabaseReference="OtherDatabaseName" /Variables:DatabaseReference2="AnotherDatabaseName"

Database references are added as variables in your SQL Server database projects, so you'll need to provide values for them when deploying.
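For scripted deployments, that SqlPackage.exe call can be assembled programmatically. A sketch reusing the flags from the command above; the helper name and the variables mapping are made up:

```python
def sqlpackage_publish_command(dacpac, server, database, reference_variables,
                               sqlpackage=r"%programfiles(x86)%\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe"):
    """Assemble the SqlPackage.exe publish command line (hypothetical helper).

    reference_variables maps SQLCMD variable names (one per database
    reference in the project) to the database names that resolve them.
    """
    cmd = [
        sqlpackage,
        "/Action:Publish",
        "/p:ScriptDatabaseOptions=False",
        "/p:BlockOnPossibleDataLoss=False",
        f"/SourceFile:{dacpac}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]
    # one /Variables:Name=Value switch per database reference
    cmd += [f"/Variables:{name}={value}"
            for name, value in reference_variables.items()]
    return cmd

cmd = sqlpackage_publish_command(
    r".\DacPacSource.dacpac", "ServerName", "TargetDatabaseToDeploy",
    {"DatabaseReference": "OtherDatabaseName",
     "DatabaseReference2": "AnotherDatabaseName"})
```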

SSIS

SSIS can be deployed using isdeploymentwizard.exe. Using the /S switch will make it run in silent mode - i.e., without the wizard:

IsDeploymentWizard /S /SP:".\YourIspac.ispac" /DS:"ServerName" /DP:"/SSISDB/Folder/ProjectName"
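The same pattern works here - a hypothetical helper mirroring the switches above:

```python
def isdeploymentwizard_command(ispac, server, catalog_folder, project_name):
    """Silent SSIS deployment (hypothetical helper), mirroring the switches
    above: /S = silent, /SP = source .ispac, /DS = destination server,
    /DP = destination path inside the SSIS catalog."""
    return [
        "IsDeploymentWizard",
        "/S",
        f"/SP:{ispac}",
        f"/DS:{server}",
        f"/DP:/SSISDB/{catalog_folder}/{project_name}",
    ]

cmd = isdeploymentwizard_command(r".\YourIspac.ispac", "ServerName",
                                 "Folder", "ProjectName")
```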

SSAS (Tabular Model)

Compared to other project types, SSAS has quite a few configuration options. Once you've figured them out, it's actually pretty easy though: you basically need three configuration files. For a project named "ProjectName", they look like this:

  • ProjectName.configsettings
    • Contains data source information: impersonation and connection string
  • ProjectName.deploymentoptions
    • Deployment options: transactional deployment? How to handle partitions and roles? What type of processing is required after installing?
  • ProjectName.deploymenttargets
    • Deployment targets. What server to install on? What will the database name be?

Apart from these three, you of course need the '.asdatabase' file here - it is the only file you need from the Visual Studio build. All other configuration you generally don't want to copy from the build, but keep deployment-specific.

With these four files (three mentioned above + ProjectName.asdatabase) in one folder, you call the Analysis Services deployment wizard:

"%programfiles(x86)%\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe" \Path\To\ProjectName.asdatabase /s:\Path\To\Logfile
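Since the wizard needs all four files sitting next to each other, a small pre-flight check can save some debugging time. A sketch; the helper name is made up:

```python
import os

# the three config files plus the .asdatabase produced by the build
REQUIRED_SUFFIXES = (".asdatabase", ".configsettings",
                     ".deploymentoptions", ".deploymenttargets")

def ssas_deployment_files(folder, project_name):
    """Pre-flight check (hypothetical helper): return the four files the
    Analysis Services deployment wizard needs, or fail fast if one is
    missing from the folder."""
    paths = [os.path.join(folder, project_name + suffix)
             for suffix in REQUIRED_SUFFIXES]
    missing = [p for p in paths if not os.path.isfile(p)]
    if missing:
        raise FileNotFoundError(f"missing SSAS deployment files: {missing}")
    return paths
```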

SSRS

There are numerous ways of deploying your reports automatically. I'll just mention the one I'm currently using, which relies on an .rss file published a long time ago by John Desch. Using this file, I can deploy my reports with the following command:

"%programfiles(x86)%\Microsoft SQL Server\130\Tools\Binn\rs.exe" -i \Path\To\RssFile.rss -s ServerName/SSRSInstance -v ReportFolder="ReportFolder" -v DataSourceFolder="DataSourceFolder" -v DataSetFolder="DataSetFolder" -v DataSourcePath="/DataSourcePath" -v filePath="\Path\To\ReportFolder" -e Mgmt2010

Conclusion: building blocks of automated deployment

As you can see, the various "building blocks" of automated deployment are not that hard to put together, and all Microsoft BI project types can be deployed from the command line. The key thing to create now is a script that reads the dependencies from the solution file and creates a batch deployment process. The easiest and most powerful way to do that, IMHO, is to create a BIML script reading and describing the dependencies. You'll then end up with an SSIS package that handles the dependencies and is capable of parallel execution (instead of a batch file deploying linearly). Keeping in mind that you'd need Mist in order to automate the generation of SSIS packages from BIML, I might end up using a PowerShell script instead... As soon as my PoSh script is ready for release, it'll appear here first!