Friday, October 4, 2013

Powershell SQL Agent Job Steps and the .Count Property

This has been frustrating me for a few days now and I finally cracked it, more or less.

I have a handful of SQL Agent jobs that run on our Central Management Server Instance. These jobs check certain settings across our environment and email me a report if any servers are out of compliance. I realize I could use Policy Based Management, but I've found this system works well, except for one thing...

The scripts populate an array in Powershell with the offending records. I then check the size of the array and if it has > 0 members, I email the results to an operator on the CMS instance. I realized recently that if the array has 1 member, I would not be notified. Only if there were two or more items in the array would I get an email. I had tested my script locally and it worked fine; the problem only seemed to occur when the script ran under the SQL Agent. Enter Google. Some searching turned up this article, which describes how in versions of Powershell prior to v3 (the SQLPS.exe used by the SQL Agent is based on v2 in SQL 2008R2), single-element (or empty) arrays are collapsed into a single (non-array) item, which lacks the .Count property. The workaround is to wrap the results as an array type with @(), like IF (@($QueryResults).Count -gt 0). Sure enough, this worked ... in testing on my local machine. Now, when running through the SQL Agent, I receive the emailed results even if the array is empty. Some testing confirmed that when the array is ostensibly empty, the Count property is still returning a value of 1, but only when running through the SQL Agent. WTH.
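
For illustration, here's a minimal sketch of the v2 behavior and the @() wrapper. The $OneRow object below is hypothetical, standing in for the single row Invoke-Sqlcmd returns when only one record matches:

## Hypothetical single-row result standing in for what Invoke-Sqlcmd returns when only one record matches
$OneRow = New-Object PSObject -Property @{ ServerName = 'SQL01'; DatabaseName = 'Sales' }

## In Powershell v2, a lone object has no .Count property, so this returns nothing
$OneRow.Count

## Wrapping the result in the array subexpression operator forces an array,
## so .Count behaves the same for zero, one, or many elements
@($OneRow).Count    # 1
@().Count           # 0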

Having not yet discovered the underlying cause of this issue, I've found a way around it. I first convert the entire array to a string and test its length rather than the number of elements in the array. If the length is greater than 1, then I know I should send the notification email. It's kind of a hack and I don't like it, so if anyone reading this knows the real reason for the behavior and a better way around it, I'd love to hear it.

Should you find it useful, here is an example of the full script that I use to find databases not owned by 'sa'. You can add exceptions to it to ignore specific non-'sa' owners, which is useful for skipping databases for applications like Sharepoint that use database ownership as part of their security strategy.

$ServerGroup = 'Production' #Replace with the name of your CMS Server group to check
$CMServer = 'CMSServerInstance' #Replace with the name of your CMS SQL instance
$Operator = 'DBAs' #Replace with the Operator Group that should be emailed
$FromAddress = 'sql_checkup@yourdomain.com' #Replace with From Address for email being sent
$msg_Subject = "$ServerGroup Database Ownership Report" #Replace with subject line for email being sent
$SmtpServer = 'relay.yourdomain.local' #Replace with name of SMTP Server for email relay

## Fetch List of Servers in the group $ServerGroup from Central Management Server $CMServer ##
$ServerList = Invoke-Sqlcmd -Query "SELECT DISTINCT s.server_name AS 'ServerName'
FROM msdb.dbo.sysmanagement_shared_server_groups_internal g
INNER JOIN msdb.dbo.sysmanagement_shared_registered_servers_internal s
ON g.server_group_id = s.server_group_id
WHERE g.name = '$ServerGroup'
ORDER BY s.server_name ;" -ServerInstance $CMServer

## Build results by executing our query against each server in the list we just built ##
$QueryResults = foreach ($ServerName in $ServerList)
{
Invoke-Sqlcmd -Query "SELECT @@ServerName As ServerName, d.name As DatabaseName, s.name As Owner
       FROM sys.databases d LEFT OUTER JOIN sys.syslogins s on d.owner_sid = s.sid
       WHERE ((s.name <> 'sa' OR s.name IS NULL) AND ISNULL(s.name,'') NOT IN('DOMAIN\excluded_user', 'APP_ExcludedUser')) AND d.source_database_id IS NULL
       ORDER BY @@ServerName, d.name
" -ServerInstance $ServerName.ServerName
}

## If there are any rows in the result set, email it to the Operator specified in $Operator ##
$QueryResultsString = Out-String -InputObject $QueryResults
If ($QueryResultsString.Length -gt 0)
{
  ## Build data set of Email addresses for the operator specified (if more than one address, assumes ';' delimiter) ##
  $email_addresses = Invoke-Sqlcmd -Query "create table #email (address nvarchar(100))
  declare @email_addresses nvarchar(100)
  declare @address nvarchar(100)
  declare @Pos INT
  select @email_addresses = email_address from msdb.dbo.sysoperators where name = '$Operator'
  SET @email_addresses = LTRIM(RTRIM(@email_addresses))+ ';'
  SET @Pos = CHARINDEX(';', @email_addresses, 1)
  WHILE @Pos > 0
      BEGIN
          SET @address = LTRIM(RTRIM(LEFT(@email_addresses, @Pos - 1)))
          IF @address <> ''
              BEGIN
                  INSERT INTO #email (address)
                    VALUES (CAST(@address AS VARCHAR(MAX)))
              END
          SET @email_addresses = RIGHT(@email_addresses, LEN(@email_addresses) - @Pos)
          SET @Pos = CHARINDEX(';', @email_addresses, 1)
      END
  select * from #email
  drop table #email ;" -ServerInstance $CMServer

  ## Convert the data set to a string array for building the addressee list later ##
  ## Wrap in @() so a single-row result is still treated as an array (the same v2 quirk described above) ##
  $email_addresses_array = @()
  for ($i=0; $i -lt @($email_addresses).Length; $i++)
    {
      $email_addresses_array = $email_addresses_array + (@($email_addresses)[$i].ItemArray -join ",")
    }

  ## Build the email and send it ##
  $Body = "The following databases are not owned by sa :`n"
  $Body += $QueryResults | Out-String
  $msg = New-Object System.Net.Mail.MailMessage
  $msg.From = new-object System.Net.Mail.MailAddress($FromAddress)
  foreach ($email_address in $email_addresses_array)
    {
      If ($email_address.Length -gt 0)
        {
          $msg.To.Add("$email_address")
        }
    }
  $msg.Subject = $msg_Subject
  $msg.Body = $Body
  $smtpClient = new-object System.Net.Mail.SmtpClient
  $smtpClient.Host = $SmtpServer
  $smtpClient.Send($msg)
}


Thursday, September 26, 2013

Thoughts on the discontinuation of Microsoft's MCM/MCSM program

It's old news, I know, but Microsoft killed the MCM/MCSM program, announcing it just before the Labor Day holiday weekend. There isn't much I can say that hasn't already been said; every SQL blogger out there seems to have weighed in already. There was also a very popular Connect item calling for the program's reinstatement that contained a HUGE outpouring of support for the program and some thoughtful responses from Microsoft. Oddly, that Connect item seems to have disappeared.

Personally, I feel this was a mistake on Microsoft's part. Killing off the one certification program they have that truly means something, for what appears to be financial reasons, was a short-sighted move. If, on the other hand, as Tim Sneath would like me to believe, the program was killed for lack of interest and participation, then I believe it was still a mistake: doing so before having something to take its place, something to solve the participation problem, still seems shortsighted. In either case, I hope they reconsider and implement a new program in its place, one that maintains the integrity of the program and continues to really mean something.

If making the announcement just before a holiday weekend was part of a plan, the plan worked. I was out camping that weekend and backpacking the following week, so I didn't get the news until nearly two weeks after the fact. Initially, I was kinda crushed. I passed the SQL MCM knowledge exam last November at the PASS Summit and took the lab exam in late December. I narrowly failed; my report said I received 82% of the required score. I've been preparing most of the year for a retake and had penciled in a late October attempt. Then this news.

I wondered if I should even bother retaking the exam now, with the program effectively dead on 10/1, or if I should go ahead and bump up my timetable for a retake and give it one more shot. In the end, I decided to retake the exam, which I did today, having finished about an hour ago.

Why?

  • In the hope that the program will be rebooted in some meaningful way and my hoped-for 2008 MCM certification will grandfather me in, in some way.
  • Because it bothered me that I didn't pass the first time.
  • Because I think the certification will still be meaningful, even if the program is dead.
One interesting tidbit: the exam proctor, Ralph something-or-other (I wish I'd remembered to write down his last name), mentioned that they've extended the program through the end of the year and waived the 90 day waiting period for exam retakes. That means if I fail again, I'll have one more shot if I can come up with the exam fee. It's odd, though, that I've seen no announcements of this extension, other than a blurb on the MCM Advanced Certification page that states: "If you have outstanding Masters or Architect exams to complete, please contact our Advanced Certification team at advcert@microsoft.com to arrange to take your exams before December 31, 2013." 

In any event, I'll see where my chips fell here in 4-6 weeks. I feel like I did better than I did last time, but don't feel like I aced it. It's going to be close. 

Tuesday, March 19, 2013

Speaking at SQLSaturday #206 in Madison, WI

I'll be presenting on SQL Server Data Tools (SSDT) for Visual Studio 2012 at SQLSaturday #206 in Madison, WI at Madison Area Technical College on April 6th. I'll be in the 4PM timeslot in Room 142B.

This will be a very similar talk to the one I gave at FoxPASS in February. The revised abstract:

In 2007, Microsoft introduced Data Dude for Visual Studio 2005 to bring database development the processes, controls and tools for code management that application developers had been enjoying for ages. It had its problems, but it mostly worked and was a welcome step in the right direction. Many versions later we have SQL Server Data Tools for Visual Studio 2012 (and 2010), a more mature product that finally brings database development up to par with traditional application development and has many features data geeks like ourselves will truly love. This talk will provide an overview of SSDT’s features and show you how to get started using it to save time, implement better code controls and be a better DBA or database developer.

Monday, March 18, 2013

Odd Behavior With .NET SQL Client Connection Pooling

This is an odd phenomenon that I encountered recently and wanted to share. I have some Powershell scripts that I use to drop and recreate databases that are used for testing some of our applications. The script first checks to see if the target database already exists and drops it if it does. It checks for open connections to the database (in sys.sysprocesses) and kills them if there are any (using the KillAllProcesses(TargetDB) method of the Microsoft.SqlServer.Management.Smo.Server object). What was happening was that, for some databases, my check for active connections would return 0, but when the script went to drop the database, it would fail, saying there were active connections. Oddly, on the instances where it did work, the check for open connections would always return >0, even if I knew there were no open connections when the script was kicked off.
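
For context, here is a rough sketch of the check-and-kill portion of the script. It's a simplified stand-in for the real thing, and assumes $TargetSQLInstance and $TargetDatabase are set earlier in the script:

  ## Simplified sketch of the open-connection check and kill (not the exact production script)
  [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
  $smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server $TargetSQLInstance

  ## Count sessions currently connected to the target database
  $openSessions = @(Invoke-Sqlcmd -ServerInstance $TargetSQLInstance -Query "SELECT spid FROM sys.sysprocesses WHERE DB_NAME(dbid) = '$TargetDatabase';")

  if ($openSessions.Count -gt 0)
    {
      ## Kill the connections so the subsequent DROP DATABASE can succeed
      $smoServer.KillAllProcesses($TargetDatabase)
    }
  ## ...the script then drops and recreates the database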

That last item got me thinking that the script itself was keeping connections open. Sure enough, watching active connections while the script ran showed that when the script checked whether the database already existed, the explicit close of the connection was not actually closing it. The quick fix was to disable connection pooling when opening the connection that checks for the existence of the database. Doing so actually closes the connection when I tell it to close, rather than keeping it open for later re-use.


  ## Pooling=false means Close() really closes the connection instead of returning it to the pool
  $conn2 = New-Object System.Data.SqlClient.SqlConnection "Server=$TargetSQLInstance;Database=$TargetDatabase;Integrated Security=SSPI;Pooling=false;"
  try
    {
      ## Opening a connection to the target database serves as the existence check;
      ## if it fails, we land in the catch block
      $conn2.Open()
      $conn2.Close()
    }
  catch
    {
      ## CheckForErrors is an error-handling function defined elsewhere in the script
      CheckForErrors
    }

What I was seeing on the server where the script worked was that, after the existence check but before the drop, sp_who2 and sp_lock showed a .NET Client Connection from my workstation to the target database still open, with a shared DB lock on the target database.
sp_who2:

sp_lock:

The script would see the open connection, ask me if I wanted to kill it, I'd say yes, the script would kill my own connection, and the drop would succeed. On the server where it did NOT work, here's what I'd see at the same spot in the script:
sp_who2:

sp_lock:

Notice that the connection left open by my existence check is to master, not the target database, but the connection still has a shared lock open on the database, so the drop would fail. It would seem that on the first instance of SQL Server, closed pooled connections remain in the database context in which they were opened, but on the second instance, the context changes to the master database when the connection is closed.

After trying this out on a bunch of instances here, I've determined that the behavior appears to be specific to the exact version of SQL Server. I don't have enough time to test a bunch of configs to see where the behavior changed, but I see the connection left open to master on 2008R2 RTM CU9 and older, and I see it on the target database on 2008R2 SP1 CU4 and newer. Somewhere in between, the behavior seems to have changed.

Refactoring T-SQL using SSDT - Part 2


First, my apologies for the delay in posting. Life sometimes has a way of getting in the way of the best intentions. Rest assured it was not due to a lack of desire to do so, just a lack of time.

In this, the second part of my short series on Refactoring T-SQL in SSDT, we're going to go over fully qualifying object names and moving objects to a new schema. These two items are pretty closely related. For instance, perhaps you wish to move all of the objects for a particular application to its own schema. If you previously had not been fully qualifying your object names, you'll need to now.

3. Fully Qualify Object Names

When developing database code, if all your objects reside in the dbo schema, it is pretty easy to fall into the habit of not qualifying your object names. After all, the code works without them. There are good reasons to do so, though. For me, the biggest reason is that it simply improves readability, but there are other technical reasons as well. For instance, at compile time, all the objects referenced in the code are resolved by a lookup. If you have not specified the schema, the query optimizer will first look in the user's default schema, whatever that happens to be. That extra lookup, while small and only occurring at compile time, is still an extra lookup that could be avoided. Also, specifying the schema now will make changing that schema later, should it be necessary, much easier.

SSDT makes all of this easy. We can refactor a specific object or an entire project, depending on where we right-click. For some reason, however, we're not able to refactor all objects of a specific type (e.g. Stored Procedures) by right-clicking that folder in the Solution Explorer. For this example, let's refactor a single stored procedure. We first right-click on the procedure in the Solution Explorer and select Refactor > Fully-qualify Names...


This will bring up the Preview Changes window just like we saw previously with the Expand Wildcard and Rename refactor tasks. Let's have a look at the kind of things that will be refactored here.


Note that it added the schema to the name of the stored procedure itself as well as to all table names within the stored procedure. It also added table aliases to items in the select list. Had the procedure called any user-defined functions, those would have been qualified as well. Clicking Apply will make the previewed changes to the stored procedure. It's that easy. One interesting thing to note is that references to Common Table Expressions will be fully qualified as well. For instance, if we refactor the MERGE statement used in the post-deployment script that loads the State table, we see that all the references to the CTE for the target of the merge, appropriately called 'Target', are now fully qualified.


4. Move an Object to a Different Schema

SSDT makes this operation a breeze. Previously, you'd have to change the schema of the object and then hunt down every reference to that object and change the schema in each of them. Now it can be done in a single operation. In this case, though, invoking the refactor dialog is a bit tricky: you can't just right-click the object in the Solution Explorer; you have to have the item open and then right-click on the object's definition. For instance, let's move the State table to a new schema called 'Lookup'. We need to open the definition for the State table and then right-click on the CREATE TABLE statement.



This will bring up the 'Move to Schema' dialog. I'll select 'Lookup' for the New Schema, make sure 'Preview Changes' is checked and click OK.

We're presented with the Preview Changes dialog and we can see that not only is it changing the table definition, but all the constraints on it as well as referencing procedures, functions and pre/post deployment scripts. How cool is this?!


Potential Gotchas

As mentioned above, just where to find the refactor menu item you're looking for can be tricky. Many items can be refactored only at the individual item or entire project level, and nothing in-between. Also, some items like the Rename (from Part 1) and the Move to Schema refactor presented here are only accessible when right-clicking on the object definition, not the object's file in the Solution Explorer.

Monday, February 11, 2013

Refactoring T-SQL using SSDT - Part 1

One of my favorite tools in my bag of SQL tricks is SQL Server Data Tools (SSDT) for Visual Studio. It's the successor to the old "DB Pro" projects and can be a T-SQL junkie's best friend. I recently presented on SSDT at the February FoxPASS meeting and I've submitted a similar talk to SQL Saturday #206 in Madison, WI on April 6th. Over the next few weeks, I'll be posting about some of my favorite features in SSDT as well as some of my biggest complaints and work-arounds. This is not an introduction to SSDT; for that, I'd refer you to any of a number of great SSDT resources on the web, including Gert Drapers' great talk at TechEd.

Today I want to talk about refactoring T-SQL code in SSDT. SSDT has a number of refactor options; I'll be going over two of them today and others later in the week.

1. Expand a Wildcard

We all recognize that SELECT * will only end in tears. SSDT, however, provides a quick and easy way to eliminate its use from a project and expand those wildcards into a nice list of columns. You can do so at the project level or for just a single instance. To refactor a single instance, simply right-click on the offending line and select Refactor > Expand Wildcards...
You're then given the opportunity to preview the changes that will be made. You can choose to apply or Cancel.
If you click Apply, you'll see that we now have expanded the wildcard to select all the columns of the table by name.
You can also expand all the wildcards in the entire project or solution by right-clicking on the project in the Solution Explorer.

It will search for all wildcards in the entire project or solution and you'll see all of them in the Preview Changes dialog.
It really is that simple! When I inherit a new application or am first given a database to deploy from a developer, one of the first things I do is use these refactor methods to root out some bad practices. SSDT also gives us the ability to do static code analysis, but that's a topic for another post.

2. Rename a Column or Table

A task that should be fairly straightforward is renaming a table or column. Whether it's to bring a project in line with corporate naming standards or just because you changed your mind, renaming an object can often be an exercise in frustration. Finding all instances of the referenced object can involve querying sys.syscomments, sys.sysdepends and other tables, and even then you may not find ALL of the references. SSDT does a great job of tracking dependencies and can make this job a snap. Simply go to the definition of the object you wish to rename, right-click on it and select Refactor > Rename...
You are then presented with a dialog where you can enter the new name for the object. In this case, I'm renaming the "Birthday" column to "DOB".
When you click OK, much like with the Expand Wildcard refactor, you'll be presented with a Preview Changes dialog where you can see every object SSDT tracked down that will need to change and the text of the change it will be making. In this case, you can see that we're not only changing the table definition: SSDT also found a stored procedure that refers to the column we're changing and has renamed the reference for us. Incidentally, it's the same stored procedure where we expanded the wildcard earlier. The most common opposition to expanding wildcards I've encountered is that it makes renaming columns difficult, but the Rename refactor eliminates this excuse. Handy!
After you click Apply, the previewed changes will be made and you're all set. Or are you? Notice that in the Solution Explorer a new file has appeared at the root of the project called "Sample_SSDT_refactorlog". The filename will vary depending on the name of the project (Sample_SSDT in this case), but you will have a new file here. What the heck is it?
Let's have a look. If we inspect the file, we'll see that it is an XML file that is a log of all the refactor commands that change the data structure of the project. Our Expand Wildcard is NOT in here, but the Column Rename IS.
Why is that? What is this file used for? It confused me as well, and since it had "Log" in the name, I just removed it as unnecessary cruft in the project. Then I went to deploy my changed project to my Dev-Integration server and the data that was in the renamed column disappeared. Whoops. It seems that SSDT and the Data Tier Application Framework (DACFX) use this file to know when a data-containing object is renamed, otherwise it would believe that the old object was dropped and an unrelated new object was created. In our case, DACFX will actually copy the data from the old column to the new if we leave this file in place. When a rename operation is done at Deploy-time, a new system table named __RefactorLog will appear in the database. It contains the Operation Key of the rename operation that has been applied and corresponds to the same Key in the log file in the project. This is so DACFX can keep track of what rename operations have occurred already and can apply any new ones that may conflict if not applied in the correct order. This is all pretty flipping cool, if you ask me.
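
If you're curious, you can peek at that system table after a deployment. A quick, hypothetical check (instance and database names are placeholders) might look like this:

## Hypothetical check of the refactor log table DACFX creates in the target database
Invoke-Sqlcmd -ServerInstance 'DevIntegrationInstance' -Database 'Sample_SSDT' -Query "SELECT * FROM dbo.__RefactorLog;"

The keys returned should correspond to the Operation Keys recorded in the project's refactor log file.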

Potential Gotchas

The sad reality, though, is that SSDT is not perfect. It can do a lot, but it can't do everything. One big thing you'll need to be aware of when doing these two refactor operations is that SSDT cannot refactor Dynamic SQL. In our sample project there is a stored procedure that uses dynamic SQL to build a WHERE clause. You'll see that it also uses a wildcard that did not get expanded. 
Dynamic SQL, to SQL Server and to SSDT, is just a string. A string that is later passed to sp_executesql, but a string nonetheless. As such, SSDT can't be expected to be able to refactor it. This is a case where you'll have to be careful and know your application, and whether it uses dynamic SQL, so you can refactor those bits by hand. 

Monday, February 4, 2013

Does a Database Restore 'Prime' the Buffer Pool?

A co-worker asked me one day last week if, when a database is restored, the restored pages are retained in the buffer pool. I immediately answered 'no', but I realized this was something I'd either always assumed or someone had told me once upon a time. I was sure I was right, but I wanted to remind myself of the how and why.

The first thing I did was to clear the buffer pool, restore a database, and then check the DMV sys.dm_os_buffer_descriptors to see whether there were pages for that database in the buffer pool.

SELECT
    (CASE WHEN ([database_id] = 32767) THEN N'Resource Database'
          ELSE DB_NAME ([database_id]) END) AS [DatabaseName],
    COUNT (*) * 8 / 1024 AS [Size_MB]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id]
ORDER BY DB_NAME ([database_id]);


Sure enough, the amount of the buffer pool used by that database did not change and remained zero. In fact, if I perform the opposite test of doing some selects against the database to read pages into the buffer pool and then restore the database, we can see that again the used buffer pool for that database returns to zero after the restore. So what's going on? Why doesn't a restore 'prime' the buffer pool? In short, because the restored data pages never enter the buffer pool in the first place!
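
For anyone who wants to repeat the test, here is a rough sketch of the sequence, wrapped in Powershell for convenience. The instance name, database name, and backup path are hypothetical; don't clear the buffer pool on a production system:

## Clear the buffer pool (test systems only!)
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "CHECKPOINT; DBCC DROPCLEANBUFFERS;"

## Restore the test database
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "RESTORE DATABASE [BufferPoolTest] FROM DISK = N'C:\Backups\BufferPoolTest.bak' WITH REPLACE;"

## Now re-run the buffer descriptor query above; BufferPoolTest should still show 0 MB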

During backup and restore operations, SQL Server uses a separate memory area for buffering the data being handled. This buffer is NOT the buffer pool! In fact, this memory is allocated from the OS, outside of the server memory that you can allocate using the "max server memory" and "min server memory" settings. You can configure the size of this buffer using the BUFFERCOUNT and MAXTRANSFERSIZE options of the BACKUP and RESTORE commands. The total buffer size is BUFFERCOUNT * MAXTRANSFERSIZE. By default, MAXTRANSFERSIZE is 1MB and the BUFFERCOUNT is variable and version dependent, but can be calculated by the formula "NumberofBackupDevices * 3 + NumberofBackupDevices + (2 * NumberofVolumesInvolved)" for SQL 2005+. Details on this formula can be found here. In a very simple situation of a single backup device and a single volume, there would be 7 buffers for a total of 7MB used. Modifying these values can, in some cases, offer a performance benefit, but you'll need to test various scenarios in your environment in order to determine the optimal settings. In general, the benefits to be gained are minimal, but that doesn't mean it's not worth testing! Note that it is possible, especially when using large values on 32-bit systems, to request more buffer space than the OS can deliver, in which case you'll receive the error:

Msg 701, Level 17, State 17, Line 1
There is insufficient system memory to run this query.
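
Getting back to the options themselves, a quick test backup that overrides both settings might look something like this (instance, database and path names are placeholders, not a recommendation for your environment):

## Hypothetical test backup overriding the defaults:
## 50 buffers * 4MB MAXTRANSFERSIZE = 200MB of backup buffer memory requested from the OS
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "
BACKUP DATABASE [BufferPoolTest]
TO DISK = N'C:\Backups\BufferPoolTest_tuned.bak'
WITH BUFFERCOUNT = 50, MAXTRANSFERSIZE = 4194304, INIT;"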


When testing, you can use trace flags 3605 & 3213 to have SQL Server write backup/restore statistics (including the values used for BUFFERCOUNT and MAXTRANSFERSIZE) to the error log. Also, Ben Snaidero wrote a great article talking about performance tuning backups.
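
If you want to see the values a particular backup actually used, something along these lines will write them to the error log (again, names and paths are placeholders):

## Enable the trace flags globally, run a backup, then check the SQL Server error log
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "DBCC TRACEON (3605, 3213, -1);"
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "BACKUP DATABASE [BufferPoolTest] TO DISK = N'C:\Backups\BufferPoolTest_tf.bak' WITH INIT;"
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "DBCC TRACEOFF (3605, 3213, -1);"
## The resulting error log entries include the BUFFERCOUNT and MAXTRANSFERSIZE that were used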

Friday, February 1, 2013

Speaking at FoxPASS on Feb 6

I'll be presenting at the Monthly FoxPASS meeting next Wednesday Feb 6th. I'll be speaking on SQL Server Data Tools (SSDT) for Visual Studio 2012. I've submitted it to SQL Saturday #206 as well. Here is my entirely too wordy abstract:

Summary: Application development has long had processes, controls and tools for code management, including source control, build management, code refactoring, deployment packaging and more. Management of database code has been an afterthought, perhaps keeping scripts in source control or using third-party schema comparison tools, but the times are changing. In 2007, Microsoft introduced Data Dude for Visual Studio 2005 to fill the gap. It was buggy and lacking in many respects, but it mostly worked and was a welcome step in the right direction. Many versions later we have SQL Server Data Tools for Visual Studio 2012 (and 2010), a much more mature product that finally brings database development up to par with traditional application development and has many features data geeks like ourselves will truly love. This talk will provide a quick overview of SSDT’s features and show you how to get started using it to save time, implement better code controls and be a better DBA or database developer.

Odd ADSI OpenQuery Performance Issue When Domain Controller was Retired

I recently ran into a Sharepoint/SSRS performance issue with slow report load times. That is, the report itself was slow to load, before picking parameters and actually executing it. Once it did load, though, the report ran just fine. Some tracing determined that the hangup was in the population of the parameter lists for the report. The lists are dynamic, displaying specific values based on the executing user's Active Directory group memberships. The user's group memberships are determined through a stored procedure that uses an ADSI OpenQuery to interrogate AD.
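
The queries themselves follow the usual ADSI linked-server pattern. A generic, hypothetical example (assuming a linked server named ADSI and made-up instance, domain and group names; the real procedure is more involved):

## Generic ADSI linked-server query returning the members of an AD group,
## so group membership can then be resolved recursively
Invoke-Sqlcmd -ServerInstance 'ReportingInstance' -Query "
SELECT sAMAccountName, displayName
FROM OPENQUERY(ADSI,
  'SELECT sAMAccountName, displayName
   FROM ''LDAP://DC=yourdomain,DC=com''
   WHERE objectCategory = ''person''
     AND memberOf = ''CN=ReportUsers,OU=Groups,DC=yourdomain,DC=com''');"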

Because AD groups can be nested or bundled with a role, the procedure processes groups/subgroups recursively. Our environment is pretty complex, so each time this procedure runs, it is often doing about 100 or so ADSI queries. Normally, this still completes in a fraction of a second and report load times are acceptable. Until about a week ago, when suddenly report load times began taking about 2 minutes. Having traced it to this procedure, a statement-level Profiler trace showed that each ADSI query was taking about 1015ms, whereas in all of our test and development environments they took about 15ms. Each ADSI query was taking almost exactly 1 second longer in our production environment. What the heck?

Grasping at straws at this point, I remembered from our change control meetings that an old Domain Controller had been retired about the same time the problem began. I theorized that SQL Server had cached the domain controller it used for these queries, even though the query was written to interrogate the domain, not a particular controller. Perhaps when this query was first run, it hit the now-retired controller, and now it was attempting to contact it first each and every time. Perhaps it was then timing out after 1s and retrying on another DC, which would succeed.

Since this was affecting our reporting system and not anything transactional, an after hours restart of the SQL service was simple to schedule, so we decided to try it. When in doubt, reboot, right? We hoped that when the service account logged in, it would pick up a new domain controller and the problem would go away. Sure enough, after the service restart, query times returned to the 15ms range and report load times returned to normal.

So, the question is, was our theory correct? Because we decided to test the theory by restarting the service instead of attempting to trace AD query traffic, I guess we'll never know, and that is bothering me now. I wish, in hindsight, that we had worked with our infrastructure team to trace that traffic and know for sure.

Our ADSI linked server is using the default timeout values (0). I've done some Googling to see if I could find a reference to ADSI timeouts being 1s by default, but found that the default seems to be 30s. So, I'm at a loss to confirm that our theory was correct. Any additional thoughts on the matter are much appreciated!