Our team had some lofty goals and met a vast majority of them. In the end, my personal goal for dbatools 1.0 was to have a tool that is not only useful and fun to use but trusted and stable as well. Mission accomplished: over the years, hundreds of thousands of people have used dbatools, and dbatools is even recommended by Microsoft.
Before we get started with what’s new, let’s take a look at some history.
dbatools began in July of 2014 when I was tasked with migrating a SQL Server instance that supported SharePoint. No way did I want to do that by hand! Since then, the module has grown into a full-fledged data platform solution.
Thanks so much to every single person who has volunteered any time to dbatools. You’ve helped change the SQL Server landscape.
We’ve made a ton of enhancements that we haven’t had time to share even over the past six months. Here are a few.
Availability Group support has been solidified, and New-DbaAvailabilityGroup is better than ever. Try out the changes and let us know how you like them.
Get-Help New-DbaAvailabilityGroup -Examples
We now also support all the different ways to log in to SQL Server! So basically this:
Want to try it for yourself? Here are a few examples.
# AAD Integrated Auth
Connect-DbaInstance -SqlInstance psdbatools.database.windows.net -Database dbatools
# AAD Username and Pass
Connect-DbaInstance -SqlInstance psdbatools.database.windows.net -SqlCredential username@acme.onmicrosoft.com -Database dbatools
# Managed Identity in Azure VM w/ older versions of .NET
Connect-DbaInstance -SqlInstance psdbatools.database.windows.net -Database abc -SqlCredential appid -Tenant tenantguidorname
# Managed Identity in Azure VM w/ newer versions of .NET (way faster!)
Connect-DbaInstance -SqlInstance psdbatools.database.windows.net -Database abc -AuthenticationType 'AD Universal with MFA Support'
You can also find a couple more within the MFA Pull Request on GitHub and by using Get-Help Connect-DbaInstance -Examples.
This is probably my favorite! We now support Local Server Groups and Azure Data Studio groups. Supporting Local Server Groups means that it’s now a whole lot easier to manage servers that don’t use Windows Authentication.
Here’s how you can add a local docker instance.
# First add it with your credentials
Connect-DbaInstance -SqlInstance 'dockersql1,14333' -SqlCredential sqladmin | Add-DbaRegServer -Name mydocker
# Then just use it for all of your other commands.
Get-DbaRegisteredServer -Name mydocker | Get-DbaDatabase
Totally dreamy
Import-DbaCsv is now far more reliable. While the previous implementation was faster, it didn’t work a lot of the time. The new command should suit your needs well.
Get-ChildItem C:\allmycsvs | Import-DbaCsv -SqlInstance sql2017 -Database tempdb -AutoCreateTable
In the past couple months, we’ve started focusing a bit more on Azure: both Azure SQL Database and Managed Instances. In particular, we now support migrations to Azure Managed Instances! We’ve also added a couple more commands to PowerShell Core, in particular the Masking and Data Generation commands. Over 75% of our commands run on macOS and Linux!
Still, we support PowerShell 3, Windows 7 and SQL Server 2000 when we can. Our final testing routines included ensuring support for the AllSigned Execution Policy.
We’ve also added a bunch of new commands, mostly revolving around Roles, PII, Masking, Data Generation and even ADS notebooks!
Want to see the full list? Check out our freshly updated Command Index page.
A few configuration enhancements have been made and a blog post for our configuration system is long overdue. But one of the most useful, I think, is that you can now control the client name. This is the name that shows up in logs, in Profiler and in Xevents.
# Set it
Set-DbatoolsConfig -FullName sql.connection.clientname -Value "my custom module built on top of dbatools" -Register
# Double check it
Get-DbatoolsConfig -FullName sql.connection.clientname | Select Value, Description
The -Register parameter is basically a shortcut for piping to Register-DbatoolsConfig. This writes the value to the registry; otherwise, it’ll be effective only for your current session.
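If you prefer the explicit form, the same value can be persisted by piping the configuration object to Register-DbatoolsConfig. A minimal sketch (same setting as above):
# persist an existing configuration value to the registry without using -Register
Get-DbatoolsConfig -FullName sql.connection.clientname | Register-DbatoolsConfig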
Another configuration enhancement helps with standardization. Now, all export commands will default to Documents\DbatoolsExport. You can change it by issuing the following commands.
# Set it
Set-DbatoolsConfig -FullName path.dbatoolsexport -Value "C:\temp\exports" -Register
# Double check it
Get-DbatoolsConfig -FullName path.dbatoolsexport | Select Value, Description
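Once the path is registered, export commands that aren’t given an explicit -FilePath should land there automatically. For example (Export-DbaLogin shown purely as an illustration; use whichever export command you need):
# the resulting script file is written to the configured export path
Export-DbaLogin -SqlInstance sql2017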
Something new that I like because it’s “proper” PowerShell: we’re now publishing our module with Help separated into its own file. We’re using a super cool module called HelpOut. HelpOut was created for dbatools by a former member of the PowerShell team, James Brundage.
HelpOut allows our developers to keep writing Help within the functions themselves, then separates the Help into dbatools-help.xml and the commands into allcommands.ps1, which helps with faster loading. Here’s how we do it:
Install-Maml -FunctionRoot functions, internal\functions -Module dbatools -Compact -NoVersion
It’s as simple as that! This does all of the heavy lifting: making the maml file and placing it in the proper location, and parsing the functions for allcommands.ps1!
Help will continue to be published to docs.dbatools.io and updated with each release. You can read more about HelpOut on GitHub.
We’ve got a number of breaking changes included in 1.0.
Before diving into this section, I want to emphasize that we have a command to handle a large majority of the renames! Invoke-DbatoolsRenameHelper will parse your scripts and replace script names and some parameters for you.
Renames in the past 30 days were mostly changing Instance to Server. But we also made some command names more accurate:
Test-DbaDbVirtualLogFile -> Measure-DbaDbVirtualLogFile
Uninstall-DbaWatchUpdate -> Uninstall-DbatoolsWatchUpdate
Watch-DbaUpdate -> Watch-DbatoolsUpdate
Export-DbaAvailabilityGroup has been removed entirely. The same functionality can now be found using Get-DbaAvailabilityGroup | Export-DbaScript.
All but 5 command aliases have been removed. Here are the ones that are still around:
Get-DbaRegisteredServer -> Get-DbaRegServer
Attach-DbaDatabase -> Mount-DbaDatabase
Detach-DbaDatabase -> Dismount-DbaDatabase
Start-SqlMigration -> Start-DbaMigration
Write-DbaDataTable -> Write-DbaDbTableData
I kept Start-SqlMigration because that’s where it all started, and the rest are easier to remember.
Also, all ServerInstance and SqlServer aliases have been removed. You must now use SqlInstance. For a full list of what Invoke-DbatoolsRenameHelper renames/replaces, check out the source code.
Most commands now follow parameter-naming practices we’ve observed in Microsoft’s PowerShell modules (see the sketch after this list):
-InputObject, and not DatabaseCollection or LoginCollection, etc.
-Path, and not BackupLocation or FileLocation
-FilePath, and not RemoteFile or BackupFileName
-SyncOnly is no longer an option in Copy-DbaLogin. Please use Sync-DbaLoginPermission instead.
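As a hedged illustration of those conventions (the instance and paths below are made up):
# -InputObject accepts piped or stored objects, -Path is a directory, -FilePath is a single file
$dbs = Get-DbaDatabase -SqlInstance sql2017 -Database pubs
Backup-DbaDatabase -InputObject $dbs -Path \\nas\sql\backups
Export-DbaScript -InputObject $dbs -FilePath C:\temp\pubs.sql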
-CheckForSql is no longer an option in Get-DbaDiskSpace. Perhaps the functionality can be made into a new command that can be piped into Get-DbaDiskSpace, but the implementation we had didn’t hold up.
For a full list of breaking changes, you can browse our gorgeous changelog, maintained by Andy Levy.
In case you did not hear the news, Rob Sewell and I are currently in the process of writing dbatools in a Month of Lunches! We’re really excited and hope to have a MEAP (Manning Early Access Program) available sometime in July. We will keep everyone updated here and on Twitter.
The above is what the editor looks like – a lot like markdown!
If you’d like to see what the writing process is like, I did a livestream a couple of months back while writing Chapter 6, which is about Find-DbaInstance. Sorry about the music being a bit loud; that has been fixed in subsequent streams, which can be found at youtube.com/dbatools.
Since Microsoft acquired GitHub, they’ve been rolling out some really incredible features. One such feature is Developer Sponsorships, which allows you to sponsor developers with cash subscriptions. It’s sorta like Patreon where you can pay monthly sponsorships with different tiers. If you or your company has benefitted from dbatools, consider sponsoring one or more of our developers.
Currently, GitHub has approved four of our team members to be sponsored, including me, Shawn Melton, Stuart Moore and Sander Stad. We’ve invited other dbatools developers to sign up as well.
Oh, and for the first year, GitHub will match sponsorship funds! So giving to us now is like giving double.
I’d like to give an extra special thanks to the contributors who helped get dbatools across the finish line these past couple months: Simone Bizzotto, Joshua Corrick, Patrick Flynn, Sander Stad, Cláudio Silva, Shawn Melton, Garry Bargsley, Andy Levy, George Palacios, Friedrich Weinmann, Jess Pomfret, Gareth N, Ben Miller, Shawn Tunney, Stuart Moore, Mike Petrak, Bob Pusateri, Brian Scholer, John G “Shoe” Hohengarten, Kirill Kravtsov, James Brundage, Hüseyin Demir, Gianluca Sartori and Rob Sewell.
Without you all, 1.0 would be delayed for another 5 years.
Want to know more about dbatools? Check out some of these posts:
dbatools 1.0 – the tools to break down the barriers – Shane O’Neill
dbatools 1.0 is here and why you should care – Ben Miller
dbatools 1.0 and beyond – Joshua Corrick
dbatools 1.0 – Dusty R
Your DBA Toolbox Just Got a Refresh – dbatools v1.0 is Officially Available!!! – Garry Bargsley
dbatools v1.0? It’s available – Check it out!
updating sql server instances using dbatools 1.0 – Gareth N
We’re premiering dbatools 1.0 at DataGrillen in Lingen, Germany today and will be livestreaming on Twitch.
Thank you, everyone, for your support along the way. We all hope you enjoy dbatools 1.0.
- Chrissy
This month’s T-SQL Tuesday, hosted by dbatools Major Contributor Garry Bargsley, is all about automation.
In his invitation, Garry asks “what automation are you proud of completing?” My answer is that I finally created a couple dbatools images and made them available on Docker Hub.
Docker Hub is a cloud-based repository in which Docker users and partners create, test, store and distribute container images.
I’ve long wanted to do this to help dbatools users easily create a non-production environment to test commands and safely explore our toolset. I finally made it a priority because I needed to ensure some Availability Group commands I was creating worked on Docker, too, and having some clean images permanently available was required. Also, in general, Docker is just a good thing to know for both automation and career opportunities.
First, install Docker.
Then grab two images from the dbatools repo. Note that these are Linux images.
# get the base images
docker pull dbatools/sqlinstance
docker pull dbatools/sqlinstance2
The first image will take a bit to download, but the second one will be faster because it’s based on the first! Neat.
The first instance is stacked with a bunch of objects, and the second one has a few basics to enable Availability Groups. Both dbatools images are based off of Microsoft’s SQL Server 2017 docker image.
I also added the following to make test migrations more interesting and Availability Groups possible:
Here’s a visible sampling:
Nice and familiar! You may also notice that sa is disabled. Within the image, I disabled the sa account and created another account with sysadmin privileges called sqladmin. The password, as noted below, is dbatools.IO
To set up the containers, just copy and paste the commands below. The first one sets up a shared network and the second one sets up the SQL Servers and exposes the required database engine and endpoint ports. It also names them dockersql1 and dockersql2 and gives them a hostname with the same name. I left in “docker” so that it doesn’t conflict with any potential servers named sql1 on the network.
# create a shared network
docker network create localnet
# setup two containers and expose ports
docker run -p 1433:1433 -p 5022:5022 --network localnet --hostname dockersql1 --name dockersql1 -d dbatools/sqlinstance
docker run -p 14333:1433 -p 5023:5023 --network localnet --hostname dockersql2 --name dockersql2 -d dbatools/sqlinstance2
Generally, you don’t have to map the ports to exactly what they are running locally, but Availability Groups do a bit of port detection that requires one-to-one mapping.
By the way, if you sometimes prefer a GUI like I do, check out Kitematic. It’s not ultra-useful but it’ll do.
Now we’re set up to test commands against the two containers! You can log in via SQL Server Management Studio or Azure Data Studio if you’d like to take a look first. The server name is localhost (or localhost,14333 for the second instance), the username is sqladmin and the password is dbatools.IO
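You can also connect from dbatools directly; something like this should do it (the Get-Credential prompt expects the dbatools.IO password):
# connect to both containers with the sqladmin account
$cred = Get-Credential -UserName sqladmin
Connect-DbaInstance -SqlInstance 'localhost,1433' -SqlCredential $cred
Connect-DbaInstance -SqlInstance 'localhost,14333' -SqlCredential $cred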
Note that Windows-based commands (and commands relating to SQL Configuration Manager) will not work because the image is based on SQL Server for Linux. If you’d like to test Windows-based commands such as Get-DbaDiskSpace, consider testing them on localhost if you’re running Windows.
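For example, a quick sketch of testing one of those Windows-only commands against the Docker host itself rather than the containers:
# Windows-only command, pointed at the local Windows machine
Get-DbaDiskSpace -ComputerName localhost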
Next, we’ll set up a sample availability group. Note that since it’s referring to “localhost”, you’ll want to execute this on the computer running Docker. If you’d like to run Docker on one machine and execute the code on another machine, that is possible but out of scope for this post.
# the password is dbatools.IO
$cred = Get-Credential -UserName sqladmin
# setup a powershell splat
$params = @{
Primary = "localhost"
PrimarySqlCredential = $cred
Secondary = "localhost:14333"
SecondarySqlCredential = $cred
Name = "test-ag"
Database = "pubs"
ClusterType = "None"
SeedingMode = "Automatic"
FailoverMode = "Manual"
Confirm = $false
}
# execute the command
New-DbaAvailabilityGroup @params
Beautiful!
Again, from the machine running the Docker containers, run the code below. You may note that linked servers, credentials and central management server are excluded from the export. This is because they aren’t currently supported for various Windows-centric reasons.
# the password is dbatools.IO
$cred = Get-Credential -UserName sqladmin
# First, backup the databases because backup/restore t-sql is what's exported
Backup-DbaDatabase -SqlInstance localhost:1433 -SqlCredential $cred -BackupDirectory /tmp
# Next, perform export (not currently supported on Core)
Export-DbaInstance -SqlInstance localhost:1433 -SqlCredential $cred -Exclude LinkedServers, Credentials, CentralManagementServer, BackupDevices
Whaaaat! Now imagine doing this for all of your servers in your entire estate. Want to know more? Check out simplifying disaster recovery using dbatools which covers this topic in-depth.
This command requires a shared directory. Check out Shared Drives and Configuring Docker for Windows Volumes for more information. You may notice that this command does not support linked servers, credentials, central management server or backup devices.
# the password is dbatools.IO
$cred = Get-Credential -UserName sqladmin
# perform the migration from one container to another
Start-DbaMigration -Source localhost:1433 -Destination localhost:14333 -SourceSqlCredential $cred -DestinationSqlCredential $cred -BackupRestore -SharedPath /sharedpath -Exclude LinkedServers, Credentials, CentralManagementServer, BackupDevices -Force
To stop and remove a container (and start over if you’d like! I do tons of times per day), run the following commands or use Kitematic’s GUI. This does not delete the actual images, just their resulting containers.
docker stop dockersql1 dockersql2
docker rm dockersql1 dockersql2
If you’d like to know more, the posts below are fantastic resources.
If you’d like to understand how containers work with the CI/CD process, check out this video by Eric Kang, Senior Product Manager for SQL Server.
Thanks for reading! Sorry about any typos or mistakes, I hastily wrote this while traveling back from vacation; I had to make Garry’s T-SQL Tuesday!
- Chrissy
By default, only Google, docs and dbatools are enabled but you can configure whichever providers you’d like in VS Code Settings.
At first, I was just messing around to see what it took to create a VS Code extension, but then I realized that I was actually using it and decided to share. If you use VS Code, give it a shot and let me know if you’d like any features or additional search providers!
- Chrissy
P.S. Can’t see the gif? Watch the video on YouTube.
Aliases have been added for the changes, so these are not breaking changes.
The renamed parameters make it clear that these values use the SecureString data type. Reset-DbaAdmin actually has output now! This was a big oversight for an otherwise incredibly useful and cool command.
I also added a -SqlCredential parameter to make passing secure passwords easier while still staying secure.
The Credential parameter in Connect-DbaInstance has been changed to SqlCredential and an alias to Credential has been added. Also, we fixed trusted domain support in both our internal and external Connect commands. So if you’ve ever had a problem with that before, it should work now.
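A minimal sketch of what that looks like in practice (since -Credential remains as an alias, both forms should behave the same):
$cred = Get-Credential sqladmin
# new parameter name
Connect-DbaInstance -SqlInstance sql2017 -SqlCredential $cred
# old name still works through the alias
Connect-DbaInstance -SqlInstance sql2017 -Credential $cred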
Aliases have not been created for commands using these parameters, so these are breaking changes. Parameters and columns ending in MB have been changed to the parameter or column name without MB. For instance, SizeMB -> Size. Corresponding documentation and examples have been updated as well.
Invoke-DbatoolsRenameHelper has been updated to handle the No to Exclude changes, so don’t forget you can auto-update your scripts. Running it is as simple as:
Get-ChildItem *.ps1 -Recurse | Invoke-DbatoolsRenameHelper
Note that it does not work for the breaking changes listed below (they are massive) or any of the Exclude parameters in Start-DbaMigration, since the change wouldn’t be one-to-one and I’m not good with regex.
This command has been rewritten pretty much from the ground up. It no longer requires remote registry access, just SQLWMI and PowerShell Remoting. It also accepts servers from CMS natively instead of relying on the terribly named -CmsServer parameter. To find out more, check out Get-DbaProductKey.
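As a rough sketch of the CMS-friendly approach (the CMS name is made up, and the exact pipeline support is worth verifying with Get-Help Get-DbaProductKey -Examples):
# feed registered servers from a central management server into Get-DbaProductKey
Get-DbaRegisteredServer -SqlInstance cms2017 | Get-DbaProductKey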
Invoke-DbaDbShrink now has better column names.
I’m so happy to announce that Import-DbaCsvToSql has been renamed to Import-DbaCsv and is now usable and reliable for most CSVs. The original command was ultra fast, but it came at the price of reliability. I found myself never using a fast command because it so rarely handled my imperfect data. That’s been fixed and the command is still pretty darn fast.
Writing more in-depth about this command is on my todo list, but I love that you can pipe in a bunch of CSVs from a directory and smash them into a database. Everything is done within a transaction, too, so if the import fails, the transaction is rolled back and no changes persist.
K that should do it for breaking changes, at least until we’re closer to 1.0.
We’re trying our best to ensure you don’t get too frustrated while we make necessary changes, and we thank you for your patience.
The bug bash is going incredibly well! We’re now down from 90+ open bugs to just 25 minor bugs. Huge thanks to everyone who has helped bring our count down to such a manageable number.
- Chrissy
Now, you can migrate from one server to many. This applies to both Start-DbaMigration and all of the Copy-Dba* commands, including Copy-DbaDatabase and Copy-DbaLogin.
As you may be able to see in the title bar of this Out-GridView, I am migrating from workstation, which is a SQL Server 2008 instance, to localhost\sql2016 and localhost\sql2017. My entire command is as follows:
Start-DbaMigration -Source workstation -Destination localhost\sql2016, localhost\sql2017 -BackupRestore -UseLastBackup | Out-GridView
If you examine the results, you’ll see it migrated sp_configure, then moved on to credentials – first for localhost\sql2016 then localhost\sql2017. And continued on from there, migrating central management server, database mail, server triggers and databases.
It migrates a bunch of other things too, of course. Haven’t seen or performed a migration before? Check out this 50-second video of a migration.
This one is awesome. Now, you can use your existing backups to perform a migration! Such a huge time saver. Imagine the following scenario:
Dear DBA,
Your mission, should you choose to accept it, is to migrate the SQL Server instance for a small SharePoint farm.
You’ll work with the SharePoint team and will have four hours to accomplish your portion of the migration.
All combined, the databases are about 250 GB in size. Good luck, colleague!
A four hour outage window seems reasonable to me! Here is one way I could accomplish this task:
I’ve actually done this a number of times, and even made a video about it a couple years ago.
The outdated output is drastically different from today’s output but the overall approach still applies. Note that back then, -UseLastBackup did not yet exist, so in the video, it performs a backup and restore, not just a restore. I should make a new video… one day!
So basically, this is the code I’d execute to migrate from spcluster to newcluster:
# first, modify the alias then shut down the SharePoint servers
$spservers = "spweb1", "spweb2", "spapp1", "spapp2"
Remove-DbaClientAlias -ComputerName $spservers -Alias spsql1
New-DbaClientAlias -ComputerName $spservers -ServerName newcluster -Alias spsql1
Stop-Computer -ComputerName $spservers
# Once each of the servers were shut down, I'd begin my SQL Server migration.
Start-DbaMigration -Source spcluster -Destination newcluster -BackupRestore -UseLastBackup | Out-GridView
# Mission accomplished
This would migrate all of my SQL Logins, with their passwords & SIDs, etc., jobs, linked servers, all that, then it’d build my last full, diff and log backup chain, and perform the restore to newcluster! Down. To. My. Last. Log. So awesome
Thanks so much for that functionality Stuart, Oleg and Simone!
For very large database migrations, we currently offer log shipping. Sander Stad made some cool enhancements to Invoke-DbaDbLogShipping and Invoke-DbaDbLogShipRecovery recently, too.
# Also supports multiple destinations!
# Oh, and has a ton of params, so use a PowerShell splat
$params = @{
    Source = 'sql2008'
    Destination = 'sql2016', 'sql2017'
    Database = 'shipped'
    BackupNetworkPath = '\\backups\sql'
    PrimaryMonitorServer = 'sql2012'
    SecondaryMonitorServer = 'sql2012'
    BackupScheduleFrequencyType = 'Daily'
    BackupScheduleFrequencyInterval = 1
    CompressBackup = $true
    CopyScheduleFrequencyType = 'Daily'
    CopyScheduleFrequencyInterval = 1
    GenerateFullBackup = $true
    Force = $true
}
# pass the splat
Invoke-DbaDbLogShipping @params
# And now, failover to secondary
Invoke-DbaDbLogShipRecovery -SqlInstance localhost\sql2017 -Database shipped
Other migration options such as Classic Mirroring and Availability Groups are still on the agenda.
Happy migrating!
- Chrissy
PSDatabaseClone was created by Sander Stad.
PSDatabaseClone is a PowerShell module for creating SQL Server database images and clones. It enables administrators to supply environments with database copies that are a fraction of the original size.
It is well-documented and open-source.
dbops was created by Kirill Kravtsov.
dbops is a PowerShell module that provides Continuous Integration/Continuous Deployment capabilities for SQL database deployments.
It is based on DbUp, an open source .NET library that helps you deploy changes to SQL Server databases. dbops currently supports both SQL Server and Oracle.
sqlwatch was created by Marcin Gminski.
The aim of this project is to provide free, repository-backed SQL Server monitoring.
The project is open-source and the developers are available on Twitter and in #sqlwatch in the SQL Server Community Slack.
PowerUpSQL was created by Scott Sutherland.
PowerUpSQL includes functions that support SQL Server discovery, weak configuration auditing, privilege escalation on scale, and post exploitation actions such as OS command execution.
The project is open-source and was recently featured at Black Hat USA.
If you’re new to dbatools and not familiar with our other projects, dbachecks was created by the dbatools team and is now primarily maintained by Rob Sewell.
dbachecks is a framework created by and for SQL Server pros who need to validate their environments using crowd-sourced checklists.
The project is open-source and totally beautiful.
If you’re wondering what happened to dbareports, Rob handed it off to Jason Squires who is currently in the middle of a rewrite.
If I’ve missed your module or project, let me know in the comments and I’ll happily add it to this post!
Last night was the second live stream of #PSPowerHour! Check it out.
- Chrissy
Created by Michael T Lombardi and Warren F, PSPowerHour is “like a virtual User Group, with a lightning-demo format, and room for non-PowerShell-specific content. Eight community members will give a demo each PowerHour.”
Sessions are proposed and organized on GitHub, which is really cool. Both new and seasoned speakers are invited to propose topics.
Everyone did such a great job, I love this livestream! As you can see above, the hour was opened up with my session about Default Parameter Values, which I wrote about earlier. Then Doug Finke talked about his really amazing module, ImportExcel.
Next, dbatools contributor Andrew Wickman talked about setting SQL Server trace flags using PowerShell. This session also features containers! After Andrew’s session, dbatools contributor Jess Pomfret talked about her awesome compression commands which make it really easy to enable compression within SQL Server.
Finally for dbatools content, major contributor Andy Levy discusses safely (and properly!) querying SQL Server using Invoke-DbaSqlQuery.
After that, Josh King talked about his very impressive and beautiful module BurntToast. And the hour is wrapped up by Daniel Silva, who has a fantastic demo where he controls a Raspberry Pi with PowerShell.
The next PSPowerHour will happen on August 30. The meeting’s agenda is already posted and features sessions about ChatOps, VS Code, SQL Server, getters & setters, PSKoans, PowerShell 6.0 and converting code using PowerShell.
Be sure to check it out!
- Chrissy
In his day-long session, Rob will talk about a variety of super interesting subjects including: dbachecks, PowerShell module-making, GitHub, VSTS, and dbatools. Rob is a vibrant, knowledgeable speaker and I can’t recommend this precon enough! I learn a ton every time that Rob and I present together.
Professional and Proficient PowerShell: From Writing Scripts to Developing Solutions
DBAs are seeing the benefit of using PowerShell to automate away the mundane. A modern data professional needs to be able to interact with multiple technologies and learning PowerShell increases your ability to do that and your usefulness to your company.
At the end of this fun-filled day with Rob, a former SQL Server DBA turned professional automator, you will be much more confident in being able to approach any task with PowerShell and you will leave with all of the code and demos. You can even follow along if you bring a laptop with an instance of SQL Server installed.
We will have a lot of fun along the way and you will return to work with a lot of ideas, samples and better habits to become a PowerShell ninja and save yourself and your organisation time and effort.
To register, visit the shortlink sqlps.io/precon
- Chrissy
This screenshot reminded me that I should write about our own import time stats.
You may remember years ago when I expressed how upset I was about SQLPS, SqlServer’s predecessor, taking so long to import. Five whole seconds! Thanks to the community response, Microsoft took note and now it’s down to an amazing 00:00:00.9531216!
That’s about 1 second, though oftentimes I’ve seen it load in 500ms. Congrats to Microsoft! I’m jealous. At the PASS Summit PowerShell Panel last year, people asked what we loved most and hated most about PowerShell. I already knew my answer for what I hated most: long import times.
And I immediately copped to my embarrassment that I complained about SQLPS taking so long to load, yet here we were, back in November 2017, taking just as long to load. Now to be fair, we support more SQL Server functionality and we also use non-compiled code (functions/ps1 vs cmdlets/C#) which makes it easier for the community to contribute. This means we can only do so much.
I asked C# wizard, Fred, how we can improve import times, and he immediately jumped on it by adding a class that breaks down how long each section takes.
You can test this yourself by importing dbatools then running:
[Sqlcollaborative.Dbatools.dbaSystem.DebugHost]::ImportTime
What you’ll notice is that on that supafast machine, dbatools is now down to about a 1.78 second load! Incredible. This is how we did it:
We noticed that the longest part of importing the module was importing all the extra SMO DLLs that we require for many of the commands. We import about 150 DLLs and it looks like that number will only grow as we begin to support more functionality (such as Integration Services).
To address this concern, Fred added multi-threading via runspaces to our import process. Too cool! This resulted in a significant decrease in time.
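To illustrate the general idea only (this is not the actual dbatools import code, and the path is hypothetical), off-loading Add-Type calls to a background runspace looks roughly like this:
# kick off assembly loading in the background so the rest of the import can continue
$dlls = (Get-ChildItem -Path 'C:\path\to\dbatools\bin\smo' -Filter *.dll).FullName  # hypothetical path
$ps = [PowerShell]::Create()
$null = $ps.AddScript({
    param($assemblies)
    foreach ($assembly in $assemblies) { Add-Type -Path $assembly }
}).AddArgument($dlls)
$handle = $ps.BeginInvoke()
# ... dot-source functions here while the DLLs load ...
$ps.EndInvoke($handle)   # wait for the assemblies before handing control back
$ps.Dispose()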
The other thing we did to significantly decrease import times was we combined all of the individual .ps1 files in functions*.ps1 to a single .ps1 file. So now, before every release, I combine all the newly updated commands, sign it using our code signing certificate, then publish it to the PowerShell Gallery.
It also means that we had to modify our import process to accommodate our developers because nobody, including me, wants to work on a 90,000 line file. We handled this by detecting if a .git folder exists in the dbatools path, and if it does, then it’ll skip allcommands.ps1 and import the individual .ps1 files in the functions directory.
The .git folder only exists when the git repository is cloned. This means it won’t exist in our module in the PowerShell Gallery or in a zip downloaded from GitHub.
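A simplified sketch of that detection logic (illustrative only, with a made-up module path):
# developer clones get individual files; Gallery/zip installs get the combined file
$moduleRoot = 'C:\path\to\dbatools'   # hypothetical path
if (Test-Path -Path (Join-Path $moduleRoot '.git')) {
    foreach ($file in Get-ChildItem -Path (Join-Path $moduleRoot 'functions') -Filter *.ps1) {
        . $file.FullName
    }
} else {
    . (Join-Path $moduleRoot 'allcommands.ps1')
}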
Edit: I took this a step further and compressed the ps1 to a zip. Turns out it works super well! Check out the post for more information.
I’ve heard that PowerShell Core (PSv6) is insanely fast. Unfortunately, SMO is not entirely ported to .NET Core, so we can’t yet support 6.0. However! SMO is the only portion of our module that is not 6.0 ready, so once SMO is ported, dbatools will be too. Hopefully this will result in faster load times.
On my Windows 7 test box, dbatools loads in 2.3 seconds. On a more locked down Windows 2012 server, the import takes about 4-6 seconds.
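If you want to measure this on your own machine, a rough wall-clock check in a fresh session works too:
# rough wall-clock timing of the module import
(Measure-Command { Import-Module dbatools }).TotalSeconds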
Note that you should never experience import times over 20 seconds. If you do, check your Execution Policy, which could be impacting load times. dbatools is fancy and signed by a code-signing certificate; this is awesome for code integrity, but it’s also known to slow down imports when mixed with certain Execution Policies.
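A quick way to see what you’re working with:
# check the effective execution policy at every scope
Get-ExecutionPolicy -List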
- Chrissy
I found that about 80-90% of logins/applications were covered within 48 hours, but two months of data gave me total confidence.
I always wanted to update the command, though I’m not sure Watch-DbaDbLogin is still within the scope of the module. I’ll likely remove it in dbatools 1.0, so please accept this far cooler post in its place.
There are several ways to capture logins, all with their own pros and cons. In this post, we’ll outline four possibilities: default trace, audits, extended events and session enumeration.
Note: The code in this post requires dbatools version 0.9.323. I found two bugs while testing sample scenarios. Also, this post addresses tracking logins for migration purposes, not for security purposes. Edit the where clauses as suitable for your environment.
Using the default trace is pretty lightweight and backwards compatible. While I generally try to avoid traces, I like this method because it doesn’t require remote access, it works on older SQL instances, it’s accurate and reading from the trace isn’t as CPU-intensive as it would be with an Extended Event.
Basically, no matter which way you track your logins, you’ll need to store them somewhere. Below is some T-SQL which sets up a table that is ideal for bulk importing (which we’ll do using Write-DbaDataTable).
The table is created with an index that ignores duplicate sessions. When IGNORE_DUP_KEY is ON, a duplicate row is simply ignored. So we’re going to set up a clustered index using SqlInstance, LoginName, HostName, DatabaseName, ApplicationName and StartTime. Then the collector will send a bunch of rows via bulkcopy to the table, and the table will ignore the dupes.
# This creates a "watchlogins" table in the "inventory" database
$sql = "CREATE TABLE watchlogins (
    SqlInstance varchar(128),
    LoginName varchar(128),
    HostName varchar(128),
    DatabaseName varchar(128),
    ApplicationName varchar(256),
    StartTime datetime
)
-- Create Unique Clustered Index with IGNORE_DUP_KEY=ON to avoid duplicates
CREATE UNIQUE CLUSTERED INDEX [ClusteredIndex-Combo] ON watchlogins
(
    SqlInstance ASC,
    LoginName ASC,
    HostName ASC,
    DatabaseName ASC,
    ApplicationName ASC,
    StartTime ASC
) WITH (IGNORE_DUP_KEY = ON)"
# Execute your SQL - in this case, my centralized collection server is localhost
Invoke-DbaSqlQuery -SqlInstance localhost -Query "CREATE DATABASE inventory"
Invoke-DbaSqlQuery -SqlInstance localhost -Database inventory -Query $sql
To clarify, “duplicate” logins may show up, but not duplicate sessions. Watch-DbaDbLogin only recorded the first time it ever saw a login/db/host/app combination which many people found to be less useful, especially if you run the login tracker for years. What if a login became stale?
If you’d like the first login only, remove StartTime ASC from the index.
# Set all of your servers
$servers = "sql2014","sql2016","sql2017"
# Check to see if default trace is enabled
$servers | Get-DbaSpConfigure -ConfigName DefaultTraceEnabled |
    Where-Object RunningValue -eq $false | Set-DbaSpConfigure -Value $true
Next, you’ll want to setup a collector as a scheduled SQL Agent Job.
# Exclude noise using T-SQL syntax - customize for your environment
$where = "DatabaseName is not NULL
    and DatabaseName != 'tempdb'
    and SERVERPROPERTY('MachineName') != HostName
    and ApplicationName not like 'dbatools%'
    and ApplicationName not like 'Microsoft SQL Server Management Studio%'
    and ApplicationName not like '\[%].Net SqlClient Data Provider' ESCAPE '\'" # ignore sharepoint guid stuff
# Collect the results into a variable so that the bulk import is supafast
$results = $servers | Get-DbaTrace -Id 1 | Read-DbaTraceFile -Where $where |
    Select-Object SqlInstance, LoginName, HostName, DatabaseName, ApplicationName, StartTime
# Bulk import to the centralized database in an efficient manner (piping would write line by line)
if ($results) {
    Write-DbaDataTable -InputObject $results -SqlInstance localhost -Database inventory -Table watchlogins
}
How often should you run the job? It depends. I have one server that has login information going back to November. But I’ve found that SharePoint or System Center dedicated instances only have about 20 minutes worth of login data in the default trace.
How long does the collection take? Polling 15 servers took 14 seconds to read 55,000 records and 18 seconds to write that data. Of the 55,000 records, only 115 were unique!
Audits are cool because audits can “force the instance of SQL Server to shut down, if SQL Server fails to write data to the audit target for any reason”. This ensures that 100% of your logins are captured. But my requirements for collecting migration information aren’t that high and I haven’t found the magical Audit Spec that only logs what I need. Here’s what the .sqlaudit file for SUCCESSFUL_LOGIN_GROUP looks like when you rename it to .xel and open it.
Eh, I’m missing so much stuff. And since Audits are Extended Events anyway, and I have more control over what I do and don’t want to see, we’ll skip right to Extended Events.
You can also use Extended Events. This option is pretty cool but collecting the data does require UNC access for remote servers.
# Create the table with a special index
$sql = "CREATE TABLE watchlogins (
    server_instance_name varchar(128),
    server_principal_name varchar(128),
    client_hostname varchar(128),
    [database_name] varchar(128),
    client_app_name varchar(256),
    timestamp datetime
)
-- Create Unique Clustered Index with IGNORE_DUP_KEY=ON to avoid duplicates
CREATE UNIQUE CLUSTERED INDEX [ClusteredIndex-Combo] ON watchlogins
(
    server_instance_name ASC,
    server_principal_name ASC,
    client_hostname ASC,
    [database_name] ASC,
    client_app_name ASC,
    timestamp ASC
) WITH (IGNORE_DUP_KEY = ON)"
# Execute your SQL - in this case, my centralized collection server is localhost
Invoke-DbaSqlQuery -SqlInstance localhost -Query "CREATE DATABASE inventory"
Invoke-DbaSqlQuery -SqlInstance localhost -Database inventory -Query $sql
We’ve provided a “Login Tracker” Extended Event session template that you can easily add to your estate.
This template creates a session that:
I chose sql_statement_starting because it’s the only one that I found that actually included the database name. If this doesn’t work for you, you can modify then export/import the modified Session. If you have a better suggestion, I’d love that. Please let me know; I kinda feel like this one is overkill.
# Specify your servers
$servers = Get-DbaRegisteredServer -SqlInstance sql2017
# Import the 'Login Tracker' XESession Template available in dbatools 0.9.320 and above
$sessions = Get-DbaXESessionTemplate -Template 'Login Tracker' | Import-DbaXESessionTemplate -SqlInstance $servers
# Set each one to auto-start
foreach ($session in $sessions) {
    $session.AutoStart = $true
    $session.Alter()
}
# Collect the results into a variable so that the bulk import is supafast
$results = $servers | Get-DbaXESession -Session 'Login Tracker' | Read-DbaXEFile |
    Select-Object server_instance_name, server_principal_name, client_hostname, database_name, client_app_name, timestamp
# Bulk import to the centralized database in an efficient manner (piping would write line by line)
Write-DbaDataTable -InputObject $results -SqlInstance localhost -Database inventory -Table watchlogins
So instead of placing the burden of XML shredding on the CPU of the destination SQL instance, Read-DbaXEFile uses the local resources. It does this by using the RemoteTargetFile, which is available in Get-DbaXESession but is not a default field. To unhide non-default fields, pipe to SELECT *.
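For example:
# reveal non-default properties such as RemoteTargetFile
Get-DbaXESession -SqlInstance sql2017 -Session 'Login Tracker' | Select-Object *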
Keep in mind that the entire file is read each time you enumerate. Which is not a big deal, but should be considered if you have millions of logins.
Note that I did set a max on the Login Tracker file size of 50 MB, so if you want to modify that, you can use PowerShell or SSMS (Instance Management > Extended Events > Sessions > Login Tracker > right-click Properties > Data Storage > Remove/Add). There is no dbatools command available to do this in PowerShell yet, so you’ll have to do it manually until it’s added.
This one requires no setup at all, but only captures whoever is logged in at the time that you run the command. This approach is what I originally used in Watch-DbaDbLogin (scheduled to run every 5 minutes) and it worked quite well.
So if you’ve never seen the output for Get-DbaProcess, which does session enumeration, it’s pretty useful. If you’d like something even more lightweight that still gives you most of the information you need, you can use $server.EnumProcesses().
Actually, scratch all that. Let’s go with some lightweight, backwards-compatible T-SQL that gets us only what we need and nothing more. Honestly, of all the ways, I’ve personally defaulted back to this one. It’s just so succinct and efficient. There is the possibility that I’ll miss a login, but this isn’t a security audit and really, I inventoried 100% of the logins I needed for my last migration.
# This creates a "watchlogins" table in the "inventory" database
$sql = "CREATE TABLE watchlogins (
    SqlInstance varchar(128),
    Login varchar(128),
    Host varchar(128),
    [Database] varchar(128),
    ApplicationName varchar(256),
    StartTime datetime
)
-- Create Unique Clustered Index with IGNORE_DUP_KEY=ON to avoid duplicates
CREATE UNIQUE CLUSTERED INDEX [ClusteredIndex-Combo] ON watchlogins
(
    SqlInstance ASC,
    Login ASC,
    Host ASC,
    [Database] ASC,
    ApplicationName ASC,
    StartTime ASC
) WITH (IGNORE_DUP_KEY = ON)"
# Execute your SQL - in this case, my centralized collection server is localhost
Invoke-DbaSqlQuery -SqlInstance localhost -Query "CREATE DATABASE inventory"
Invoke-DbaSqlQuery -SqlInstance localhost -Database inventory -Query $sql
# Specify your servers
$servers = "sql2014","sql2012","sql2016","sql2017"
# Setup the T-SQL
$sql = "SELECT SERVERPROPERTY('ServerName') AS SqlInstance, login_name as [Login], [host_name] as Host,
    DB_NAME(p.dbid) as [Database], s.[program_name] as Program, s.login_time as LoginTime
FROM sys.dm_exec_sessions s inner join sys.sysprocesses p on s.session_id = p.spid
WHERE p.dbid is not NULL
    and DB_NAME(p.dbid) != 'tempdb'
    and SERVERPROPERTY('MachineName') != [host_name]
    and s.[program_name] not like 'dbatools%'
    and s.[program_name] not like 'Microsoft SQL Server Management Studio%'
    and s.[program_name] not like '\[%].Net SqlClient Data Provider' ESCAPE '\'" # ignore sharepoint guid junk
# Collect relevant results
foreach ($instance in $servers) {
    $results = Invoke-DbaSqlQuery -SqlInstance $instance -Query $sql
    if ($results) {
        Write-DbaDataTable -InputObject $results -SqlInstance localhost -Database inventory -Table watchlogins
    }
}
If you’re testing the scripts on a non-busy system like I did, you may not get any results back because we’re ignoring connections from dbatools and SQL Server Management Studio.
If you’d like to ensure some results, just run this before performing a collection. This connects to SQL Server using a fake client name and performs a query that gathers database names.
$server = Connect-DbaInstance -SqlInstance sql2017 -ClientName 'My test client'
$server.Databases.Name
To schedule the collection, you can use my favorite method, SQL Server Agent. I wrote about this in-depth in a post, Scheduling PowerShell Tasks with SQL Agent.
During my own migration, I used session enumeration and setup the collector to run every 5 minutes. With Traces or Extended Events, you can collect the logins far less frequently since they are stored on the remote server.
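As a rough sketch of what that agent job might look like (job, schedule and script names are illustrative, and the parameters are worth double-checking against Get-Help before use):
# create the job, a CmdExec step that calls the collector script, and a 5-minute schedule
New-DbaAgentJob -SqlInstance localhost -Job 'Login Collector'
New-DbaAgentJobStep -SqlInstance localhost -Job 'Login Collector' -StepName 'Collect logins' -Subsystem CmdExec -Command 'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\collect-logins.ps1'
New-DbaAgentSchedule -SqlInstance localhost -Job 'Login Collector' -Schedule 'Every 5 Minutes' -FrequencyType Daily -FrequencyInterval EveryDay -FrequencySubdayType Minutes -FrequencySubdayInterval 5 -Force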
Hope this was helpful!
Chrissy